Jan 22 07:50:07 np0005592157 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 22 07:50:07 np0005592157 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 22 07:50:07 np0005592157 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:50:07 np0005592157 kernel: BIOS-provided physical RAM map:
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 07:50:07 np0005592157 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 22 07:50:07 np0005592157 kernel: NX (Execute Disable) protection: active
Jan 22 07:50:07 np0005592157 kernel: APIC: Static calls initialized
Jan 22 07:50:07 np0005592157 kernel: SMBIOS 2.8 present.
Jan 22 07:50:07 np0005592157 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 22 07:50:07 np0005592157 kernel: Hypervisor detected: KVM
Jan 22 07:50:07 np0005592157 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 07:50:07 np0005592157 kernel: kvm-clock: using sched offset of 3384423131 cycles
Jan 22 07:50:07 np0005592157 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 07:50:07 np0005592157 kernel: tsc: Detected 2800.000 MHz processor
Jan 22 07:50:07 np0005592157 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 22 07:50:07 np0005592157 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 07:50:07 np0005592157 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 22 07:50:07 np0005592157 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 22 07:50:07 np0005592157 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 22 07:50:07 np0005592157 kernel: Using GB pages for direct mapping
Jan 22 07:50:07 np0005592157 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 22 07:50:07 np0005592157 kernel: ACPI: Early table checksum verification disabled
Jan 22 07:50:07 np0005592157 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 22 07:50:07 np0005592157 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:50:07 np0005592157 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:50:07 np0005592157 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:50:07 np0005592157 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 22 07:50:07 np0005592157 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:50:07 np0005592157 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:50:07 np0005592157 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 22 07:50:07 np0005592157 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 22 07:50:07 np0005592157 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 22 07:50:07 np0005592157 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 22 07:50:07 np0005592157 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 22 07:50:07 np0005592157 kernel: No NUMA configuration found
Jan 22 07:50:07 np0005592157 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 22 07:50:07 np0005592157 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 22 07:50:07 np0005592157 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 22 07:50:07 np0005592157 kernel: Zone ranges:
Jan 22 07:50:07 np0005592157 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 07:50:07 np0005592157 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 07:50:07 np0005592157 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 07:50:07 np0005592157 kernel:  Device   empty
Jan 22 07:50:07 np0005592157 kernel: Movable zone start for each node
Jan 22 07:50:07 np0005592157 kernel: Early memory node ranges
Jan 22 07:50:07 np0005592157 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 07:50:07 np0005592157 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 22 07:50:07 np0005592157 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 07:50:07 np0005592157 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 22 07:50:07 np0005592157 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 07:50:07 np0005592157 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 07:50:07 np0005592157 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 22 07:50:07 np0005592157 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 07:50:07 np0005592157 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 07:50:07 np0005592157 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 07:50:07 np0005592157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 07:50:07 np0005592157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 07:50:07 np0005592157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 07:50:07 np0005592157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 07:50:07 np0005592157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 07:50:07 np0005592157 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 07:50:07 np0005592157 kernel: TSC deadline timer available
Jan 22 07:50:07 np0005592157 kernel: CPU topo: Max. logical packages:   8
Jan 22 07:50:07 np0005592157 kernel: CPU topo: Max. logical dies:       8
Jan 22 07:50:07 np0005592157 kernel: CPU topo: Max. dies per package:   1
Jan 22 07:50:07 np0005592157 kernel: CPU topo: Max. threads per core:   1
Jan 22 07:50:07 np0005592157 kernel: CPU topo: Num. cores per package:     1
Jan 22 07:50:07 np0005592157 kernel: CPU topo: Num. threads per package:   1
Jan 22 07:50:07 np0005592157 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 22 07:50:07 np0005592157 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 22 07:50:07 np0005592157 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 22 07:50:07 np0005592157 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 22 07:50:07 np0005592157 kernel: Booting paravirtualized kernel on KVM
Jan 22 07:50:07 np0005592157 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 07:50:07 np0005592157 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 22 07:50:07 np0005592157 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 22 07:50:07 np0005592157 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 22 07:50:07 np0005592157 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:50:07 np0005592157 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 22 07:50:07 np0005592157 kernel: random: crng init done
Jan 22 07:50:07 np0005592157 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: Fallback order for Node 0: 0 
Jan 22 07:50:07 np0005592157 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 22 07:50:07 np0005592157 kernel: Policy zone: Normal
Jan 22 07:50:07 np0005592157 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 07:50:07 np0005592157 kernel: software IO TLB: area num 8.
Jan 22 07:50:07 np0005592157 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 22 07:50:07 np0005592157 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 22 07:50:07 np0005592157 kernel: ftrace: allocated 194 pages with 3 groups
Jan 22 07:50:07 np0005592157 kernel: Dynamic Preempt: voluntary
Jan 22 07:50:07 np0005592157 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 07:50:07 np0005592157 kernel: rcu: 	RCU event tracing is enabled.
Jan 22 07:50:07 np0005592157 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 22 07:50:07 np0005592157 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 22 07:50:07 np0005592157 kernel: 	Rude variant of Tasks RCU enabled.
Jan 22 07:50:07 np0005592157 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 22 07:50:07 np0005592157 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 07:50:07 np0005592157 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 22 07:50:07 np0005592157 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:50:07 np0005592157 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:50:07 np0005592157 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:50:07 np0005592157 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 22 07:50:07 np0005592157 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 07:50:07 np0005592157 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 22 07:50:07 np0005592157 kernel: Console: colour VGA+ 80x25
Jan 22 07:50:07 np0005592157 kernel: printk: console [ttyS0] enabled
Jan 22 07:50:07 np0005592157 kernel: ACPI: Core revision 20230331
Jan 22 07:50:07 np0005592157 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 07:50:07 np0005592157 kernel: x2apic enabled
Jan 22 07:50:07 np0005592157 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 07:50:07 np0005592157 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 22 07:50:07 np0005592157 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 22 07:50:07 np0005592157 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 07:50:07 np0005592157 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 07:50:07 np0005592157 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 07:50:07 np0005592157 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 07:50:07 np0005592157 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 07:50:07 np0005592157 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 07:50:07 np0005592157 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 22 07:50:07 np0005592157 kernel: RETBleed: Mitigation: untrained return thunk
Jan 22 07:50:07 np0005592157 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 22 07:50:07 np0005592157 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 22 07:50:07 np0005592157 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 07:50:07 np0005592157 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 07:50:07 np0005592157 kernel: x86/bugs: return thunk changed
Jan 22 07:50:07 np0005592157 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 07:50:07 np0005592157 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 07:50:07 np0005592157 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 07:50:07 np0005592157 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 07:50:07 np0005592157 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 22 07:50:07 np0005592157 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 22 07:50:07 np0005592157 kernel: Freeing SMP alternatives memory: 40K
Jan 22 07:50:07 np0005592157 kernel: pid_max: default: 32768 minimum: 301
Jan 22 07:50:07 np0005592157 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 22 07:50:07 np0005592157 kernel: landlock: Up and running.
Jan 22 07:50:07 np0005592157 kernel: Yama: becoming mindful.
Jan 22 07:50:07 np0005592157 kernel: SELinux:  Initializing.
Jan 22 07:50:07 np0005592157 kernel: LSM support for eBPF active
Jan 22 07:50:07 np0005592157 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 22 07:50:07 np0005592157 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 22 07:50:07 np0005592157 kernel: ... version:                0
Jan 22 07:50:07 np0005592157 kernel: ... bit width:              48
Jan 22 07:50:07 np0005592157 kernel: ... generic registers:      6
Jan 22 07:50:07 np0005592157 kernel: ... value mask:             0000ffffffffffff
Jan 22 07:50:07 np0005592157 kernel: ... max period:             00007fffffffffff
Jan 22 07:50:07 np0005592157 kernel: ... fixed-purpose events:   0
Jan 22 07:50:07 np0005592157 kernel: ... event mask:             000000000000003f
Jan 22 07:50:07 np0005592157 kernel: signal: max sigframe size: 1776
Jan 22 07:50:07 np0005592157 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 07:50:07 np0005592157 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 22 07:50:07 np0005592157 kernel: smp: Bringing up secondary CPUs ...
Jan 22 07:50:07 np0005592157 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 07:50:07 np0005592157 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 22 07:50:07 np0005592157 kernel: smp: Brought up 1 node, 8 CPUs
Jan 22 07:50:07 np0005592157 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 22 07:50:07 np0005592157 kernel: node 0 deferred pages initialised in 17ms
Jan 22 07:50:07 np0005592157 kernel: Memory: 7763836K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 22 07:50:07 np0005592157 kernel: devtmpfs: initialized
Jan 22 07:50:07 np0005592157 kernel: x86/mm: Memory block size: 128MB
Jan 22 07:50:07 np0005592157 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 07:50:07 np0005592157 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 22 07:50:07 np0005592157 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 07:50:07 np0005592157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 07:50:07 np0005592157 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 22 07:50:07 np0005592157 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 22 07:50:07 np0005592157 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 22 07:50:07 np0005592157 kernel: audit: initializing netlink subsys (disabled)
Jan 22 07:50:07 np0005592157 kernel: audit: type=2000 audit(1769086204.925:1): state=initialized audit_enabled=0 res=1
Jan 22 07:50:07 np0005592157 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 22 07:50:07 np0005592157 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 07:50:07 np0005592157 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 07:50:07 np0005592157 kernel: cpuidle: using governor menu
Jan 22 07:50:07 np0005592157 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 07:50:07 np0005592157 kernel: PCI: Using configuration type 1 for base access
Jan 22 07:50:07 np0005592157 kernel: PCI: Using configuration type 1 for extended access
Jan 22 07:50:07 np0005592157 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 07:50:07 np0005592157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 07:50:07 np0005592157 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 07:50:07 np0005592157 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 07:50:07 np0005592157 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 07:50:07 np0005592157 kernel: Demotion targets for Node 0: null
Jan 22 07:50:07 np0005592157 kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 07:50:07 np0005592157 kernel: ACPI: Added _OSI(Module Device)
Jan 22 07:50:07 np0005592157 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 07:50:07 np0005592157 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 07:50:07 np0005592157 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 07:50:07 np0005592157 kernel: ACPI: Interpreter enabled
Jan 22 07:50:07 np0005592157 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 22 07:50:07 np0005592157 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 07:50:07 np0005592157 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 07:50:07 np0005592157 kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 07:50:07 np0005592157 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 22 07:50:07 np0005592157 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 07:50:07 np0005592157 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [3] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [4] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [5] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [6] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [7] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [8] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [9] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [10] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [11] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [12] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [13] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [14] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [15] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [16] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [17] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [18] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [19] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [20] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [21] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [22] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [23] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [24] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [25] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [26] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [27] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [28] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [29] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [30] registered
Jan 22 07:50:07 np0005592157 kernel: acpiphp: Slot [31] registered
Jan 22 07:50:07 np0005592157 kernel: PCI host bridge to bus 0000:00
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 22 07:50:07 np0005592157 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 07:50:07 np0005592157 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 07:50:07 np0005592157 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 07:50:07 np0005592157 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 07:50:07 np0005592157 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 22 07:50:07 np0005592157 kernel: iommu: Default domain type: Translated
Jan 22 07:50:07 np0005592157 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 07:50:07 np0005592157 kernel: SCSI subsystem initialized
Jan 22 07:50:07 np0005592157 kernel: ACPI: bus type USB registered
Jan 22 07:50:07 np0005592157 kernel: usbcore: registered new interface driver usbfs
Jan 22 07:50:07 np0005592157 kernel: usbcore: registered new interface driver hub
Jan 22 07:50:07 np0005592157 kernel: usbcore: registered new device driver usb
Jan 22 07:50:07 np0005592157 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 22 07:50:07 np0005592157 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 22 07:50:07 np0005592157 kernel: PTP clock support registered
Jan 22 07:50:07 np0005592157 kernel: EDAC MC: Ver: 3.0.0
Jan 22 07:50:07 np0005592157 kernel: NetLabel: Initializing
Jan 22 07:50:07 np0005592157 kernel: NetLabel:  domain hash size = 128
Jan 22 07:50:07 np0005592157 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 22 07:50:07 np0005592157 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 22 07:50:07 np0005592157 kernel: PCI: Using ACPI for IRQ routing
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 07:50:07 np0005592157 kernel: vgaarb: loaded
Jan 22 07:50:07 np0005592157 kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 07:50:07 np0005592157 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 07:50:07 np0005592157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 07:50:07 np0005592157 kernel: pnp: PnP ACPI init
Jan 22 07:50:07 np0005592157 kernel: pnp: PnP ACPI: found 5 devices
Jan 22 07:50:07 np0005592157 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 07:50:07 np0005592157 kernel: NET: Registered PF_INET protocol family
Jan 22 07:50:07 np0005592157 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 07:50:07 np0005592157 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 07:50:07 np0005592157 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 07:50:07 np0005592157 kernel: NET: Registered PF_XDP protocol family
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 22 07:50:07 np0005592157 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 22 07:50:07 np0005592157 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 22 07:50:07 np0005592157 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 88149 usecs
Jan 22 07:50:07 np0005592157 kernel: PCI: CLS 0 bytes, default 64
Jan 22 07:50:07 np0005592157 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 07:50:07 np0005592157 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 22 07:50:07 np0005592157 kernel: Trying to unpack rootfs image as initramfs...
Jan 22 07:50:07 np0005592157 kernel: ACPI: bus type thunderbolt registered
Jan 22 07:50:07 np0005592157 kernel: Initialise system trusted keyrings
Jan 22 07:50:07 np0005592157 kernel: Key type blacklist registered
Jan 22 07:50:07 np0005592157 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 22 07:50:07 np0005592157 kernel: zbud: loaded
Jan 22 07:50:07 np0005592157 kernel: integrity: Platform Keyring initialized
Jan 22 07:50:07 np0005592157 kernel: integrity: Machine keyring initialized
Jan 22 07:50:07 np0005592157 kernel: Freeing initrd memory: 87956K
Jan 22 07:50:07 np0005592157 kernel: NET: Registered PF_ALG protocol family
Jan 22 07:50:07 np0005592157 kernel: xor: automatically using best checksumming function   avx       
Jan 22 07:50:07 np0005592157 kernel: Key type asymmetric registered
Jan 22 07:50:07 np0005592157 kernel: Asymmetric key parser 'x509' registered
Jan 22 07:50:07 np0005592157 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 22 07:50:07 np0005592157 kernel: io scheduler mq-deadline registered
Jan 22 07:50:07 np0005592157 kernel: io scheduler kyber registered
Jan 22 07:50:07 np0005592157 kernel: io scheduler bfq registered
Jan 22 07:50:07 np0005592157 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 22 07:50:07 np0005592157 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 07:50:07 np0005592157 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 22 07:50:07 np0005592157 kernel: ACPI: button: Power Button [PWRF]
Jan 22 07:50:07 np0005592157 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 22 07:50:07 np0005592157 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 22 07:50:07 np0005592157 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 22 07:50:07 np0005592157 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 07:50:07 np0005592157 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 07:50:07 np0005592157 kernel: Non-volatile memory driver v1.3
Jan 22 07:50:07 np0005592157 kernel: rdac: device handler registered
Jan 22 07:50:07 np0005592157 kernel: hp_sw: device handler registered
Jan 22 07:50:07 np0005592157 kernel: emc: device handler registered
Jan 22 07:50:07 np0005592157 kernel: alua: device handler registered
Jan 22 07:50:07 np0005592157 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 22 07:50:07 np0005592157 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 22 07:50:07 np0005592157 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 22 07:50:07 np0005592157 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 22 07:50:07 np0005592157 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 22 07:50:07 np0005592157 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 07:50:07 np0005592157 kernel: usb usb1: Product: UHCI Host Controller
Jan 22 07:50:07 np0005592157 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 22 07:50:07 np0005592157 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 22 07:50:07 np0005592157 kernel: hub 1-0:1.0: USB hub found
Jan 22 07:50:07 np0005592157 kernel: hub 1-0:1.0: 2 ports detected
Jan 22 07:50:07 np0005592157 kernel: usbcore: registered new interface driver usbserial_generic
Jan 22 07:50:07 np0005592157 kernel: usbserial: USB Serial support registered for generic
Jan 22 07:50:07 np0005592157 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 07:50:07 np0005592157 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 07:50:07 np0005592157 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 07:50:07 np0005592157 kernel: mousedev: PS/2 mouse device common for all mice
Jan 22 07:50:07 np0005592157 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 22 07:50:07 np0005592157 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 22 07:50:07 np0005592157 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 22 07:50:07 np0005592157 kernel: rtc_cmos 00:04: registered as rtc0
Jan 22 07:50:07 np0005592157 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 22 07:50:07 np0005592157 kernel: rtc_cmos 00:04: setting system clock to 2026-01-22T12:50:06 UTC (1769086206)
Jan 22 07:50:07 np0005592157 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 22 07:50:07 np0005592157 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 07:50:07 np0005592157 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 22 07:50:07 np0005592157 kernel: usbcore: registered new interface driver usbhid
Jan 22 07:50:07 np0005592157 kernel: usbhid: USB HID core driver
Jan 22 07:50:07 np0005592157 kernel: drop_monitor: Initializing network drop monitor service
Jan 22 07:50:07 np0005592157 kernel: Initializing XFRM netlink socket
Jan 22 07:50:07 np0005592157 kernel: NET: Registered PF_INET6 protocol family
Jan 22 07:50:07 np0005592157 kernel: Segment Routing with IPv6
Jan 22 07:50:07 np0005592157 kernel: NET: Registered PF_PACKET protocol family
Jan 22 07:50:07 np0005592157 kernel: mpls_gso: MPLS GSO support
Jan 22 07:50:07 np0005592157 kernel: IPI shorthand broadcast: enabled
Jan 22 07:50:07 np0005592157 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 22 07:50:07 np0005592157 kernel: AES CTR mode by8 optimization enabled
Jan 22 07:50:07 np0005592157 kernel: sched_clock: Marking stable (1534004730, 146845780)->(1876190819, -195340309)
Jan 22 07:50:07 np0005592157 kernel: registered taskstats version 1
Jan 22 07:50:07 np0005592157 kernel: Loading compiled-in X.509 certificates
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 22 07:50:07 np0005592157 kernel: Demotion targets for Node 0: null
Jan 22 07:50:07 np0005592157 kernel: page_owner is disabled
Jan 22 07:50:07 np0005592157 kernel: Key type .fscrypt registered
Jan 22 07:50:07 np0005592157 kernel: Key type fscrypt-provisioning registered
Jan 22 07:50:07 np0005592157 kernel: Key type big_key registered
Jan 22 07:50:07 np0005592157 kernel: Key type encrypted registered
Jan 22 07:50:07 np0005592157 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 07:50:07 np0005592157 kernel: Loading compiled-in module X.509 certificates
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 07:50:07 np0005592157 kernel: ima: Allocated hash algorithm: sha256
Jan 22 07:50:07 np0005592157 kernel: ima: No architecture policies found
Jan 22 07:50:07 np0005592157 kernel: evm: Initialising EVM extended attributes:
Jan 22 07:50:07 np0005592157 kernel: evm: security.selinux
Jan 22 07:50:07 np0005592157 kernel: evm: security.SMACK64 (disabled)
Jan 22 07:50:07 np0005592157 kernel: evm: security.SMACK64EXEC (disabled)
Jan 22 07:50:07 np0005592157 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 22 07:50:07 np0005592157 kernel: evm: security.SMACK64MMAP (disabled)
Jan 22 07:50:07 np0005592157 kernel: evm: security.apparmor (disabled)
Jan 22 07:50:07 np0005592157 kernel: evm: security.ima
Jan 22 07:50:07 np0005592157 kernel: evm: security.capability
Jan 22 07:50:07 np0005592157 kernel: evm: HMAC attrs: 0x1
Jan 22 07:50:07 np0005592157 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 22 07:50:07 np0005592157 kernel: Running certificate verification RSA selftest
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 22 07:50:07 np0005592157 kernel: Running certificate verification ECDSA selftest
Jan 22 07:50:07 np0005592157 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 22 07:50:07 np0005592157 kernel: clk: Disabling unused clocks
Jan 22 07:50:07 np0005592157 kernel: Freeing unused decrypted memory: 2028K
Jan 22 07:50:07 np0005592157 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 22 07:50:07 np0005592157 kernel: Write protecting the kernel read-only data: 30720k
Jan 22 07:50:07 np0005592157 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 22 07:50:07 np0005592157 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 22 07:50:07 np0005592157 kernel: Run /init as init process
Jan 22 07:50:07 np0005592157 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 07:50:07 np0005592157 systemd: Detected virtualization kvm.
Jan 22 07:50:07 np0005592157 systemd: Detected architecture x86-64.
Jan 22 07:50:07 np0005592157 systemd: Running in initrd.
Jan 22 07:50:07 np0005592157 systemd: No hostname configured, using default hostname.
Jan 22 07:50:07 np0005592157 systemd: Hostname set to <localhost>.
Jan 22 07:50:07 np0005592157 systemd: Initializing machine ID from VM UUID.
Jan 22 07:50:07 np0005592157 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 22 07:50:07 np0005592157 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 22 07:50:07 np0005592157 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 22 07:50:07 np0005592157 kernel: usb 1-1: Manufacturer: QEMU
Jan 22 07:50:07 np0005592157 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 22 07:50:07 np0005592157 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 22 07:50:07 np0005592157 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 22 07:50:07 np0005592157 systemd: Queued start job for default target Initrd Default Target.
Jan 22 07:50:07 np0005592157 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 07:50:07 np0005592157 systemd: Reached target Local Encrypted Volumes.
Jan 22 07:50:07 np0005592157 systemd: Reached target Initrd /usr File System.
Jan 22 07:50:07 np0005592157 systemd: Reached target Local File Systems.
Jan 22 07:50:07 np0005592157 systemd: Reached target Path Units.
Jan 22 07:50:07 np0005592157 systemd: Reached target Slice Units.
Jan 22 07:50:07 np0005592157 systemd: Reached target Swaps.
Jan 22 07:50:07 np0005592157 systemd: Reached target Timer Units.
Jan 22 07:50:07 np0005592157 systemd: Listening on D-Bus System Message Bus Socket.
Jan 22 07:50:07 np0005592157 systemd: Listening on Journal Socket (/dev/log).
Jan 22 07:50:07 np0005592157 systemd: Listening on Journal Socket.
Jan 22 07:50:07 np0005592157 systemd: Listening on udev Control Socket.
Jan 22 07:50:07 np0005592157 systemd: Listening on udev Kernel Socket.
Jan 22 07:50:07 np0005592157 systemd: Reached target Socket Units.
Jan 22 07:50:07 np0005592157 systemd: Starting Create List of Static Device Nodes...
Jan 22 07:50:07 np0005592157 systemd: Starting Journal Service...
Jan 22 07:50:07 np0005592157 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 07:50:07 np0005592157 systemd: Starting Apply Kernel Variables...
Jan 22 07:50:07 np0005592157 systemd: Starting Create System Users...
Jan 22 07:50:07 np0005592157 systemd: Starting Setup Virtual Console...
Jan 22 07:50:07 np0005592157 systemd: Finished Create List of Static Device Nodes.
Jan 22 07:50:07 np0005592157 systemd: Finished Apply Kernel Variables.
Jan 22 07:50:07 np0005592157 systemd: Finished Create System Users.
Jan 22 07:50:07 np0005592157 systemd-journald[308]: Journal started
Jan 22 07:50:07 np0005592157 systemd-journald[308]: Runtime Journal (/run/log/journal/f2612c2e5bb249d69db033d2b0e700a7) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:50:07 np0005592157 systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 22 07:50:07 np0005592157 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 22 07:50:07 np0005592157 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 22 07:50:07 np0005592157 systemd: Started Journal Service.
Jan 22 07:50:07 np0005592157 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 07:50:07 np0005592157 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 07:50:07 np0005592157 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 07:50:07 np0005592157 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 07:50:07 np0005592157 systemd[1]: Finished Setup Virtual Console.
Jan 22 07:50:07 np0005592157 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 22 07:50:07 np0005592157 systemd[1]: Starting dracut cmdline hook...
Jan 22 07:50:07 np0005592157 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 22 07:50:07 np0005592157 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:50:07 np0005592157 systemd[1]: Finished dracut cmdline hook.
Jan 22 07:50:07 np0005592157 systemd[1]: Starting dracut pre-udev hook...
Jan 22 07:50:07 np0005592157 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 07:50:07 np0005592157 kernel: device-mapper: uevent: version 1.0.3
Jan 22 07:50:07 np0005592157 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 22 07:50:07 np0005592157 kernel: RPC: Registered named UNIX socket transport module.
Jan 22 07:50:07 np0005592157 kernel: RPC: Registered udp transport module.
Jan 22 07:50:07 np0005592157 kernel: RPC: Registered tcp transport module.
Jan 22 07:50:07 np0005592157 kernel: RPC: Registered tcp-with-tls transport module.
Jan 22 07:50:07 np0005592157 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 07:50:07 np0005592157 rpc.statd[442]: Version 2.5.4 starting
Jan 22 07:50:07 np0005592157 rpc.statd[442]: Initializing NSM state
Jan 22 07:50:07 np0005592157 rpc.idmapd[447]: Setting log level to 0
Jan 22 07:50:07 np0005592157 systemd[1]: Finished dracut pre-udev hook.
Jan 22 07:50:07 np0005592157 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 07:50:07 np0005592157 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 07:50:07 np0005592157 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 07:50:07 np0005592157 systemd[1]: Starting dracut pre-trigger hook...
Jan 22 07:50:07 np0005592157 systemd[1]: Finished dracut pre-trigger hook.
Jan 22 07:50:07 np0005592157 systemd[1]: Starting Coldplug All udev Devices...
Jan 22 07:50:07 np0005592157 systemd[1]: Created slice Slice /system/modprobe.
Jan 22 07:50:07 np0005592157 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 07:50:07 np0005592157 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 07:50:07 np0005592157 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:50:07 np0005592157 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:50:07 np0005592157 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 07:50:07 np0005592157 systemd[1]: Reached target Network.
Jan 22 07:50:07 np0005592157 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 07:50:07 np0005592157 systemd[1]: Starting dracut initqueue hook...
Jan 22 07:50:08 np0005592157 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 22 07:50:08 np0005592157 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 22 07:50:08 np0005592157 kernel: scsi host0: ata_piix
Jan 22 07:50:08 np0005592157 kernel: vda: vda1
Jan 22 07:50:08 np0005592157 kernel: scsi host1: ata_piix
Jan 22 07:50:08 np0005592157 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 22 07:50:08 np0005592157 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 22 07:50:08 np0005592157 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Initrd Root Device.
Jan 22 07:50:08 np0005592157 systemd[1]: Mounting Kernel Configuration File System...
Jan 22 07:50:08 np0005592157 kernel: ata1: found unknown device (class 0)
Jan 22 07:50:08 np0005592157 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 22 07:50:08 np0005592157 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 22 07:50:08 np0005592157 systemd[1]: Mounted Kernel Configuration File System.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target System Initialization.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Basic System.
Jan 22 07:50:08 np0005592157 systemd-udevd[483]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:50:08 np0005592157 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 22 07:50:08 np0005592157 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 22 07:50:08 np0005592157 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 07:50:08 np0005592157 systemd[1]: Finished dracut initqueue hook.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Remote File Systems.
Jan 22 07:50:08 np0005592157 systemd[1]: Starting dracut pre-mount hook...
Jan 22 07:50:08 np0005592157 systemd[1]: Finished dracut pre-mount hook.
Jan 22 07:50:08 np0005592157 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 22 07:50:08 np0005592157 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 22 07:50:08 np0005592157 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 07:50:08 np0005592157 systemd[1]: Mounting /sysroot...
Jan 22 07:50:08 np0005592157 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 22 07:50:08 np0005592157 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 22 07:50:08 np0005592157 kernel: XFS (vda1): Ending clean mount
Jan 22 07:50:08 np0005592157 systemd[1]: Mounted /sysroot.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Initrd Root File System.
Jan 22 07:50:08 np0005592157 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 22 07:50:08 np0005592157 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 22 07:50:08 np0005592157 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Initrd File Systems.
Jan 22 07:50:08 np0005592157 systemd[1]: Reached target Initrd Default Target.
Jan 22 07:50:08 np0005592157 systemd[1]: Starting dracut mount hook...
Jan 22 07:50:08 np0005592157 systemd[1]: Finished dracut mount hook.
Jan 22 07:50:08 np0005592157 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 22 07:50:09 np0005592157 rpc.idmapd[447]: exiting on signal 15
Jan 22 07:50:09 np0005592157 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 22 07:50:09 np0005592157 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Network.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Timer Units.
Jan 22 07:50:09 np0005592157 systemd[1]: dbus.socket: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 22 07:50:09 np0005592157 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Initrd Default Target.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Basic System.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Initrd Root Device.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Initrd /usr File System.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Path Units.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Remote File Systems.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Slice Units.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Socket Units.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target System Initialization.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Local File Systems.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Swaps.
Jan 22 07:50:09 np0005592157 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped dracut mount hook.
Jan 22 07:50:09 np0005592157 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped dracut pre-mount hook.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 22 07:50:09 np0005592157 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped dracut initqueue hook.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Coldplug All udev Devices.
Jan 22 07:50:09 np0005592157 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped dracut pre-trigger hook.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Setup Virtual Console.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 22 07:50:09 np0005592157 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Closed udev Control Socket.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Closed udev Kernel Socket.
Jan 22 07:50:09 np0005592157 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped dracut pre-udev hook.
Jan 22 07:50:09 np0005592157 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped dracut cmdline hook.
Jan 22 07:50:09 np0005592157 systemd[1]: Starting Cleanup udev Database...
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 22 07:50:09 np0005592157 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Stopped Create System Users.
Jan 22 07:50:09 np0005592157 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Cleanup udev Database.
Jan 22 07:50:09 np0005592157 systemd[1]: Reached target Switch Root.
Jan 22 07:50:09 np0005592157 systemd[1]: Starting Switch Root...
Jan 22 07:50:09 np0005592157 systemd[1]: Switching root.
Jan 22 07:50:09 np0005592157 systemd-journald[308]: Journal stopped
Jan 22 07:50:09 np0005592157 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 22 07:50:09 np0005592157 kernel: audit: type=1404 audit(1769086209.273:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 22 07:50:09 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 07:50:09 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 07:50:09 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 07:50:09 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 07:50:09 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 07:50:09 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 07:50:09 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 07:50:09 np0005592157 kernel: audit: type=1403 audit(1769086209.395:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 22 07:50:09 np0005592157 systemd: Successfully loaded SELinux policy in 124.184ms.
Jan 22 07:50:09 np0005592157 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.892ms.
Jan 22 07:50:09 np0005592157 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 07:50:09 np0005592157 systemd: Detected virtualization kvm.
Jan 22 07:50:09 np0005592157 systemd: Detected architecture x86-64.
Jan 22 07:50:09 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 07:50:09 np0005592157 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd: Stopped Switch Root.
Jan 22 07:50:09 np0005592157 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 07:50:09 np0005592157 systemd: Created slice Slice /system/getty.
Jan 22 07:50:09 np0005592157 systemd: Created slice Slice /system/serial-getty.
Jan 22 07:50:09 np0005592157 systemd: Created slice Slice /system/sshd-keygen.
Jan 22 07:50:09 np0005592157 systemd: Created slice User and Session Slice.
Jan 22 07:50:09 np0005592157 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 07:50:09 np0005592157 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 22 07:50:09 np0005592157 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 22 07:50:09 np0005592157 systemd: Reached target Local Encrypted Volumes.
Jan 22 07:50:09 np0005592157 systemd: Stopped target Switch Root.
Jan 22 07:50:09 np0005592157 systemd: Stopped target Initrd File Systems.
Jan 22 07:50:09 np0005592157 systemd: Stopped target Initrd Root File System.
Jan 22 07:50:09 np0005592157 systemd: Reached target Local Integrity Protected Volumes.
Jan 22 07:50:09 np0005592157 systemd: Reached target Path Units.
Jan 22 07:50:09 np0005592157 systemd: Reached target rpc_pipefs.target.
Jan 22 07:50:09 np0005592157 systemd: Reached target Slice Units.
Jan 22 07:50:09 np0005592157 systemd: Reached target Swaps.
Jan 22 07:50:09 np0005592157 systemd: Reached target Local Verity Protected Volumes.
Jan 22 07:50:09 np0005592157 systemd: Listening on RPCbind Server Activation Socket.
Jan 22 07:50:09 np0005592157 systemd: Reached target RPC Port Mapper.
Jan 22 07:50:09 np0005592157 systemd: Listening on Process Core Dump Socket.
Jan 22 07:50:09 np0005592157 systemd: Listening on initctl Compatibility Named Pipe.
Jan 22 07:50:09 np0005592157 systemd: Listening on udev Control Socket.
Jan 22 07:50:09 np0005592157 systemd: Listening on udev Kernel Socket.
Jan 22 07:50:09 np0005592157 systemd: Mounting Huge Pages File System...
Jan 22 07:50:09 np0005592157 systemd: Mounting POSIX Message Queue File System...
Jan 22 07:50:09 np0005592157 systemd: Mounting Kernel Debug File System...
Jan 22 07:50:09 np0005592157 systemd: Mounting Kernel Trace File System...
Jan 22 07:50:09 np0005592157 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 07:50:09 np0005592157 systemd: Starting Create List of Static Device Nodes...
Jan 22 07:50:09 np0005592157 systemd: Starting Load Kernel Module configfs...
Jan 22 07:50:09 np0005592157 systemd: Starting Load Kernel Module drm...
Jan 22 07:50:09 np0005592157 systemd: Starting Load Kernel Module efi_pstore...
Jan 22 07:50:09 np0005592157 systemd: Starting Load Kernel Module fuse...
Jan 22 07:50:09 np0005592157 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 22 07:50:09 np0005592157 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd: Stopped File System Check on Root Device.
Jan 22 07:50:09 np0005592157 systemd: Stopped Journal Service.
Jan 22 07:50:09 np0005592157 systemd: Starting Journal Service...
Jan 22 07:50:09 np0005592157 kernel: ACPI: bus type drm_connector registered
Jan 22 07:50:09 np0005592157 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 07:50:09 np0005592157 systemd: Starting Generate network units from Kernel command line...
Jan 22 07:50:09 np0005592157 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:50:09 np0005592157 systemd: Starting Remount Root and Kernel File Systems...
Jan 22 07:50:09 np0005592157 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 07:50:09 np0005592157 systemd: Starting Apply Kernel Variables...
Jan 22 07:50:09 np0005592157 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 22 07:50:09 np0005592157 kernel: fuse: init (API version 7.37)
Jan 22 07:50:09 np0005592157 systemd: Starting Coldplug All udev Devices...
Jan 22 07:50:09 np0005592157 systemd: Mounted Huge Pages File System.
Jan 22 07:50:09 np0005592157 systemd: Mounted POSIX Message Queue File System.
Jan 22 07:50:09 np0005592157 systemd: Mounted Kernel Debug File System.
Jan 22 07:50:09 np0005592157 systemd: Mounted Kernel Trace File System.
Jan 22 07:50:09 np0005592157 systemd: Finished Create List of Static Device Nodes.
Jan 22 07:50:09 np0005592157 systemd-journald[680]: Journal started
Jan 22 07:50:09 np0005592157 systemd-journald[680]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:50:09 np0005592157 systemd: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Queued start job for default target Multi-User System.
Jan 22 07:50:09 np0005592157 systemd: Finished Load Kernel Module configfs.
Jan 22 07:50:09 np0005592157 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd: Started Journal Service.
Jan 22 07:50:09 np0005592157 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Load Kernel Module drm.
Jan 22 07:50:09 np0005592157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 22 07:50:09 np0005592157 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Load Kernel Module fuse.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Generate network units from Kernel command line.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Apply Kernel Variables.
Jan 22 07:50:09 np0005592157 systemd[1]: Mounting FUSE Control File System...
Jan 22 07:50:09 np0005592157 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 07:50:09 np0005592157 systemd[1]: Starting Rebuild Hardware Database...
Jan 22 07:50:09 np0005592157 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 22 07:50:09 np0005592157 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 07:50:09 np0005592157 systemd[1]: Starting Load/Save OS Random Seed...
Jan 22 07:50:09 np0005592157 systemd[1]: Starting Create System Users...
Jan 22 07:50:09 np0005592157 systemd[1]: Mounted FUSE Control File System.
Jan 22 07:50:09 np0005592157 systemd-journald[680]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:50:09 np0005592157 systemd-journald[680]: Received client request to flush runtime journal.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 07:50:09 np0005592157 systemd[1]: Finished Load/Save OS Random Seed.
Jan 22 07:50:09 np0005592157 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Create System Users.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target Preparation for Local File Systems.
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target Local File Systems.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 22 07:50:10 np0005592157 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 22 07:50:10 np0005592157 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 07:50:10 np0005592157 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Automatic Boot Loader Update...
Jan 22 07:50:10 np0005592157 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 07:50:10 np0005592157 bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Automatic Boot Loader Update.
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Security Auditing Service...
Jan 22 07:50:10 np0005592157 systemd[1]: Starting RPC Bind...
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Rebuild Journal Catalog...
Jan 22 07:50:10 np0005592157 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 22 07:50:10 np0005592157 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Rebuild Journal Catalog.
Jan 22 07:50:10 np0005592157 systemd[1]: Started RPC Bind.
Jan 22 07:50:10 np0005592157 augenrules[708]: /sbin/augenrules: No change
Jan 22 07:50:10 np0005592157 augenrules[723]: No rules
Jan 22 07:50:10 np0005592157 augenrules[723]: enabled 1
Jan 22 07:50:10 np0005592157 augenrules[723]: failure 1
Jan 22 07:50:10 np0005592157 augenrules[723]: pid 703
Jan 22 07:50:10 np0005592157 augenrules[723]: rate_limit 0
Jan 22 07:50:10 np0005592157 augenrules[723]: backlog_limit 8192
Jan 22 07:50:10 np0005592157 augenrules[723]: lost 0
Jan 22 07:50:10 np0005592157 augenrules[723]: backlog 0
Jan 22 07:50:10 np0005592157 augenrules[723]: backlog_wait_time 60000
Jan 22 07:50:10 np0005592157 augenrules[723]: backlog_wait_time_actual 0
Jan 22 07:50:10 np0005592157 systemd[1]: Started Security Auditing Service.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Rebuild Hardware Database.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Update is Completed...
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Update is Completed.
Jan 22 07:50:10 np0005592157 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 07:50:10 np0005592157 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target System Initialization.
Jan 22 07:50:10 np0005592157 systemd[1]: Started dnf makecache --timer.
Jan 22 07:50:10 np0005592157 systemd[1]: Started Daily rotation of log files.
Jan 22 07:50:10 np0005592157 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target Timer Units.
Jan 22 07:50:10 np0005592157 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 07:50:10 np0005592157 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target Socket Units.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting D-Bus System Message Bus...
Jan 22 07:50:10 np0005592157 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:50:10 np0005592157 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 07:50:10 np0005592157 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:50:10 np0005592157 systemd-udevd[742]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:50:10 np0005592157 systemd[1]: Started D-Bus System Message Bus.
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target Basic System.
Jan 22 07:50:10 np0005592157 dbus-broker-lau[756]: Ready
Jan 22 07:50:10 np0005592157 systemd[1]: Starting NTP client/server...
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 22 07:50:10 np0005592157 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 22 07:50:10 np0005592157 systemd[1]: Starting IPv4 firewall with iptables...
Jan 22 07:50:10 np0005592157 systemd[1]: Started irqbalance daemon.
Jan 22 07:50:10 np0005592157 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 22 07:50:10 np0005592157 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:50:10 np0005592157 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:50:10 np0005592157 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target sshd-keygen.target.
Jan 22 07:50:10 np0005592157 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 22 07:50:10 np0005592157 systemd[1]: Reached target User and Group Name Lookups.
Jan 22 07:50:10 np0005592157 systemd[1]: Starting User Login Management...
Jan 22 07:50:10 np0005592157 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 22 07:50:10 np0005592157 chronyd[793]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 07:50:10 np0005592157 chronyd[793]: Loaded 0 symmetric keys
Jan 22 07:50:10 np0005592157 chronyd[793]: Using right/UTC timezone to obtain leap second data
Jan 22 07:50:10 np0005592157 chronyd[793]: Loaded seccomp filter (level 2)
Jan 22 07:50:10 np0005592157 systemd[1]: Started NTP client/server.
Jan 22 07:50:10 np0005592157 systemd-logind[785]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 07:50:10 np0005592157 systemd-logind[785]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 07:50:10 np0005592157 systemd-logind[785]: New seat seat0.
Jan 22 07:50:10 np0005592157 systemd[1]: Started User Login Management.
Jan 22 07:50:10 np0005592157 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 22 07:50:10 np0005592157 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 22 07:50:10 np0005592157 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 22 07:50:10 np0005592157 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 22 07:50:10 np0005592157 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 22 07:50:10 np0005592157 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 22 07:50:11 np0005592157 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 22 07:50:11 np0005592157 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 22 07:50:11 np0005592157 kernel: Console: switching to colour dummy device 80x25
Jan 22 07:50:11 np0005592157 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 22 07:50:11 np0005592157 kernel: [drm] features: -context_init
Jan 22 07:50:11 np0005592157 kernel: [drm] number of scanouts: 1
Jan 22 07:50:11 np0005592157 kernel: [drm] number of cap sets: 0
Jan 22 07:50:11 np0005592157 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 22 07:50:11 np0005592157 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 22 07:50:11 np0005592157 kernel: Console: switching to colour frame buffer device 128x48
Jan 22 07:50:11 np0005592157 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 22 07:50:11 np0005592157 kernel: kvm_amd: TSC scaling supported
Jan 22 07:50:11 np0005592157 kernel: kvm_amd: Nested Virtualization enabled
Jan 22 07:50:11 np0005592157 kernel: kvm_amd: Nested Paging enabled
Jan 22 07:50:11 np0005592157 kernel: kvm_amd: LBR virtualization supported
Jan 22 07:50:11 np0005592157 iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Jan 22 07:50:11 np0005592157 systemd[1]: Finished IPv4 firewall with iptables.
Jan 22 07:50:11 np0005592157 cloud-init[839]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 22 Jan 2026 12:50:11 +0000. Up 6.27 seconds.
Jan 22 07:50:11 np0005592157 systemd[1]: run-cloud\x2dinit-tmp-tmpa9hm8klw.mount: Deactivated successfully.
Jan 22 07:50:11 np0005592157 systemd[1]: Starting Hostname Service...
Jan 22 07:50:11 np0005592157 systemd[1]: Started Hostname Service.
Jan 22 07:50:11 np0005592157 systemd-hostnamed[853]: Hostname set to <np0005592157.novalocal> (static)
Jan 22 07:50:11 np0005592157 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 22 07:50:11 np0005592157 systemd[1]: Reached target Preparation for Network.
Jan 22 07:50:11 np0005592157 systemd[1]: Starting Network Manager...
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8102] NetworkManager (version 1.54.3-2.el9) is starting... (boot:ab3239db-3271-4bdd-a6d4-5ceb67d83a2c)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8108] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8182] manager[0x55e4c9c05000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8218] hostname: hostname: using hostnamed
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8218] hostname: static hostname changed from (none) to "np0005592157.novalocal"
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8223] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8324] manager[0x55e4c9c05000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8324] manager[0x55e4c9c05000]: rfkill: WWAN hardware radio set enabled
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8363] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8364] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8364] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8365] manager: Networking is enabled by state file
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8367] settings: Loaded settings plugin: keyfile (internal)
Jan 22 07:50:11 np0005592157 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8375] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8394] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8405] dhcp: init: Using DHCP client 'internal'
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8408] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8422] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8429] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8437] device (lo): Activation: starting connection 'lo' (04c4e722-12df-49cb-b7ee-622fbd23b757)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8445] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8448] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8476] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8481] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8484] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8486] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8488] device (eth0): carrier: link connected
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8492] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8499] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8504] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8507] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8508] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8510] manager: NetworkManager state is now CONNECTING
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8511] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8517] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8521] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8581] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8588] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 07:50:11 np0005592157 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8607] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 07:50:11 np0005592157 systemd[1]: Started Network Manager.
Jan 22 07:50:11 np0005592157 systemd[1]: Reached target Network.
Jan 22 07:50:11 np0005592157 systemd[1]: Starting Network Manager Wait Online...
Jan 22 07:50:11 np0005592157 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 22 07:50:11 np0005592157 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8873] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8876] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8888] device (lo): Activation: successful, device activated.
Jan 22 07:50:11 np0005592157 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 22 07:50:11 np0005592157 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 07:50:11 np0005592157 systemd[1]: Reached target NFS client services.
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8925] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8928] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 07:50:11 np0005592157 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8934] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8939] device (eth0): Activation: successful, device activated.
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8946] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 07:50:11 np0005592157 systemd[1]: Reached target Remote File Systems.
Jan 22 07:50:11 np0005592157 NetworkManager[857]: <info>  [1769086211.8953] manager: startup complete
Jan 22 07:50:11 np0005592157 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:50:11 np0005592157 systemd[1]: Finished Network Manager Wait Online.
Jan 22 07:50:11 np0005592157 systemd[1]: Starting Cloud-init: Network Stage...
Jan 22 07:50:12 np0005592157 cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 22 Jan 2026 12:50:12 +0000. Up 7.27 seconds.
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.174         | 255.255.255.0 | global | fa:16:3e:f6:cd:9b |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fef6:cd9b/64 |       .       |  link  | fa:16:3e:f6:cd:9b |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 22 07:50:12 np0005592157 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:50:13 np0005592157 cloud-init[921]: Generating public/private rsa key pair.
Jan 22 07:50:13 np0005592157 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 22 07:50:13 np0005592157 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 22 07:50:13 np0005592157 cloud-init[921]: The key fingerprint is:
Jan 22 07:50:13 np0005592157 cloud-init[921]: SHA256:1PHR4rGTDxT80HdadZfgeKuRYD9gVJosVIovtCYUdWw root@np0005592157.novalocal
Jan 22 07:50:13 np0005592157 cloud-init[921]: The key's randomart image is:
Jan 22 07:50:13 np0005592157 cloud-init[921]: +---[RSA 3072]----+
Jan 22 07:50:13 np0005592157 cloud-init[921]: |   ....o.oooo=. *|
Jan 22 07:50:13 np0005592157 cloud-init[921]: |    . +E+.ooO.o.*|
Jan 22 07:50:13 np0005592157 cloud-init[921]: |   . o.o.O.+.X +.|
Jan 22 07:50:13 np0005592157 cloud-init[921]: |  . . o.+ + O +  |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |   . + .S  = =   |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |    o .     + .  |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |           .     |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |                 |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |                 |
Jan 22 07:50:13 np0005592157 cloud-init[921]: +----[SHA256]-----+
Jan 22 07:50:13 np0005592157 cloud-init[921]: Generating public/private ecdsa key pair.
Jan 22 07:50:13 np0005592157 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 22 07:50:13 np0005592157 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 22 07:50:13 np0005592157 cloud-init[921]: The key fingerprint is:
Jan 22 07:50:13 np0005592157 cloud-init[921]: SHA256:NlbhZoO7ZP6ei+nP2hclrtHaAwqgbwfm4ySwTS8Q1BY root@np0005592157.novalocal
Jan 22 07:50:13 np0005592157 cloud-init[921]: The key's randomart image is:
Jan 22 07:50:13 np0005592157 cloud-init[921]: +---[ECDSA 256]---+
Jan 22 07:50:13 np0005592157 cloud-init[921]: | ..E.     .      |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |.  o     o .     |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |. .     . *      |
Jan 22 07:50:13 np0005592157 cloud-init[921]: | .  .    = .. .  |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |o .. .  S  o o   |
Jan 22 07:50:13 np0005592157 cloud-init[921]: | *..o .* oo +    |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |. ++o. .o. * .   |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |   += . .*o.+    |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |   o.o .=+Oo .   |
Jan 22 07:50:13 np0005592157 cloud-init[921]: +----[SHA256]-----+
Jan 22 07:50:13 np0005592157 cloud-init[921]: Generating public/private ed25519 key pair.
Jan 22 07:50:13 np0005592157 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 22 07:50:13 np0005592157 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 22 07:50:13 np0005592157 cloud-init[921]: The key fingerprint is:
Jan 22 07:50:13 np0005592157 cloud-init[921]: SHA256:QAvbnlXKoS0lLX7N6J2MkVFGtsxp5b6WymPlCFukjaM root@np0005592157.novalocal
Jan 22 07:50:13 np0005592157 cloud-init[921]: The key's randomart image is:
Jan 22 07:50:13 np0005592157 cloud-init[921]: +--[ED25519 256]--+
Jan 22 07:50:13 np0005592157 cloud-init[921]: |    . o.oo* .    |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |     =.BoO =     |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |    ..*.=** .    |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |     ..==.+.     |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |      ooSX ..    |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |        B * .o   |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |       . = ++    |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |      E ..+o.    |
Jan 22 07:50:13 np0005592157 cloud-init[921]: |         .o.     |
Jan 22 07:50:13 np0005592157 cloud-init[921]: +----[SHA256]-----+
Jan 22 07:50:13 np0005592157 systemd[1]: Finished Cloud-init: Network Stage.
Jan 22 07:50:13 np0005592157 systemd[1]: Reached target Cloud-config availability.
Jan 22 07:50:13 np0005592157 systemd[1]: Reached target Network is Online.
Jan 22 07:50:13 np0005592157 systemd[1]: Starting Cloud-init: Config Stage...
Jan 22 07:50:13 np0005592157 systemd[1]: Starting Crash recovery kernel arming...
Jan 22 07:50:13 np0005592157 systemd[1]: Starting Notify NFS peers of a restart...
Jan 22 07:50:13 np0005592157 systemd[1]: Starting System Logging Service...
Jan 22 07:50:13 np0005592157 systemd[1]: Starting OpenSSH server daemon...
Jan 22 07:50:13 np0005592157 sm-notify[1004]: Version 2.5.4 starting
Jan 22 07:50:13 np0005592157 systemd[1]: Starting Permit User Sessions...
Jan 22 07:50:13 np0005592157 systemd[1]: Started Notify NFS peers of a restart.
Jan 22 07:50:13 np0005592157 systemd[1]: Finished Permit User Sessions.
Jan 22 07:50:13 np0005592157 systemd[1]: Started OpenSSH server daemon.
Jan 22 07:50:13 np0005592157 systemd[1]: Started Command Scheduler.
Jan 22 07:50:13 np0005592157 systemd[1]: Started Getty on tty1.
Jan 22 07:50:13 np0005592157 systemd[1]: Started Serial Getty on ttyS0.
Jan 22 07:50:13 np0005592157 systemd[1]: Reached target Login Prompts.
Jan 22 07:50:13 np0005592157 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Jan 22 07:50:13 np0005592157 systemd[1]: Started System Logging Service.
Jan 22 07:50:13 np0005592157 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 22 07:50:13 np0005592157 systemd[1]: Reached target Multi-User System.
Jan 22 07:50:13 np0005592157 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 22 07:50:13 np0005592157 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 22 07:50:13 np0005592157 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 22 07:50:13 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 07:50:13 np0005592157 kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Jan 22 07:50:13 np0005592157 kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 22 07:50:14 np0005592157 cloud-init[1187]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 22 Jan 2026 12:50:13 +0000. Up 8.98 seconds.
Jan 22 07:50:14 np0005592157 systemd[1]: Finished Cloud-init: Config Stage.
Jan 22 07:50:14 np0005592157 systemd[1]: Starting Cloud-init: Final Stage...
Jan 22 07:50:14 np0005592157 dracut[1265]: dracut-057-102.git20250818.el9
Jan 22 07:50:14 np0005592157 dracut[1267]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 22 07:50:14 np0005592157 cloud-init[1341]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 22 Jan 2026 12:50:14 +0000. Up 9.45 seconds.
Jan 22 07:50:14 np0005592157 cloud-init[1367]: #############################################################
Jan 22 07:50:14 np0005592157 cloud-init[1369]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 22 07:50:14 np0005592157 cloud-init[1374]: 256 SHA256:NlbhZoO7ZP6ei+nP2hclrtHaAwqgbwfm4ySwTS8Q1BY root@np0005592157.novalocal (ECDSA)
Jan 22 07:50:14 np0005592157 cloud-init[1380]: 256 SHA256:QAvbnlXKoS0lLX7N6J2MkVFGtsxp5b6WymPlCFukjaM root@np0005592157.novalocal (ED25519)
Jan 22 07:50:14 np0005592157 cloud-init[1384]: 3072 SHA256:1PHR4rGTDxT80HdadZfgeKuRYD9gVJosVIovtCYUdWw root@np0005592157.novalocal (RSA)
Jan 22 07:50:14 np0005592157 cloud-init[1386]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 22 07:50:14 np0005592157 cloud-init[1388]: #############################################################
Jan 22 07:50:14 np0005592157 cloud-init[1341]: Cloud-init v. 24.4-8.el9 finished at Thu, 22 Jan 2026 12:50:14 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.69 seconds
Jan 22 07:50:14 np0005592157 systemd[1]: Finished Cloud-init: Final Stage.
Jan 22 07:50:14 np0005592157 systemd[1]: Reached target Cloud-init target.
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 07:50:14 np0005592157 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: memstrack is not available
Jan 22 07:50:15 np0005592157 dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 07:50:15 np0005592157 dracut[1267]: memstrack is not available
Jan 22 07:50:15 np0005592157 dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 07:50:15 np0005592157 dracut[1267]: *** Including module: systemd ***
Jan 22 07:50:16 np0005592157 dracut[1267]: *** Including module: fips ***
Jan 22 07:50:16 np0005592157 dracut[1267]: *** Including module: systemd-initrd ***
Jan 22 07:50:16 np0005592157 dracut[1267]: *** Including module: i18n ***
Jan 22 07:50:16 np0005592157 dracut[1267]: *** Including module: drm ***
Jan 22 07:50:17 np0005592157 chronyd[793]: Selected source 198.181.199.84 (2.centos.pool.ntp.org)
Jan 22 07:50:17 np0005592157 chronyd[793]: System clock TAI offset set to 37 seconds
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: prefixdevname ***
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: kernel-modules ***
Jan 22 07:50:17 np0005592157 kernel: block vda: the capability attribute has been deprecated.
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: kernel-modules-extra ***
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: qemu ***
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: fstab-sys ***
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: rootfs-block ***
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: terminfo ***
Jan 22 07:50:17 np0005592157 dracut[1267]: *** Including module: udev-rules ***
Jan 22 07:50:18 np0005592157 dracut[1267]: Skipping udev rule: 91-permissions.rules
Jan 22 07:50:18 np0005592157 dracut[1267]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 22 07:50:18 np0005592157 dracut[1267]: *** Including module: virtiofs ***
Jan 22 07:50:18 np0005592157 dracut[1267]: *** Including module: dracut-systemd ***
Jan 22 07:50:18 np0005592157 dracut[1267]: *** Including module: usrmount ***
Jan 22 07:50:18 np0005592157 dracut[1267]: *** Including module: base ***
Jan 22 07:50:18 np0005592157 dracut[1267]: *** Including module: fs-lib ***
Jan 22 07:50:18 np0005592157 dracut[1267]: *** Including module: kdumpbase ***
Jan 22 07:50:19 np0005592157 dracut[1267]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 22 07:50:19 np0005592157 dracut[1267]:  microcode_ctl module: mangling fw_dir
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 22 07:50:19 np0005592157 dracut[1267]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 22 07:50:19 np0005592157 dracut[1267]: *** Including module: openssl ***
Jan 22 07:50:19 np0005592157 dracut[1267]: *** Including module: shutdown ***
Jan 22 07:50:19 np0005592157 dracut[1267]: *** Including module: squash ***
Jan 22 07:50:20 np0005592157 dracut[1267]: *** Including modules done ***
Jan 22 07:50:20 np0005592157 dracut[1267]: *** Installing kernel module dependencies ***
Jan 22 07:50:20 np0005592157 dracut[1267]: *** Installing kernel module dependencies done ***
Jan 22 07:50:20 np0005592157 dracut[1267]: *** Resolving executable dependencies ***
Jan 22 07:50:21 np0005592157 irqbalance[783]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 22 07:50:21 np0005592157 irqbalance[783]: IRQ 25 affinity is now unmanaged
Jan 22 07:50:21 np0005592157 irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 22 07:50:21 np0005592157 irqbalance[783]: IRQ 31 affinity is now unmanaged
Jan 22 07:50:21 np0005592157 irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 22 07:50:21 np0005592157 irqbalance[783]: IRQ 28 affinity is now unmanaged
Jan 22 07:50:21 np0005592157 irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 22 07:50:21 np0005592157 irqbalance[783]: IRQ 32 affinity is now unmanaged
Jan 22 07:50:21 np0005592157 irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 22 07:50:21 np0005592157 irqbalance[783]: IRQ 30 affinity is now unmanaged
Jan 22 07:50:21 np0005592157 irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 22 07:50:21 np0005592157 irqbalance[783]: IRQ 29 affinity is now unmanaged
Jan 22 07:50:22 np0005592157 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:50:22 np0005592157 dracut[1267]: *** Resolving executable dependencies done ***
Jan 22 07:50:22 np0005592157 dracut[1267]: *** Generating early-microcode cpio image ***
Jan 22 07:50:22 np0005592157 dracut[1267]: *** Store current command line parameters ***
Jan 22 07:50:22 np0005592157 dracut[1267]: Stored kernel commandline:
Jan 22 07:50:22 np0005592157 dracut[1267]: No dracut internal kernel commandline stored in the initramfs
Jan 22 07:50:22 np0005592157 dracut[1267]: *** Install squash loader ***
Jan 22 07:50:23 np0005592157 dracut[1267]: *** Squashing the files inside the initramfs ***
Jan 22 07:50:24 np0005592157 dracut[1267]: *** Squashing the files inside the initramfs done ***
Jan 22 07:50:24 np0005592157 dracut[1267]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 22 07:50:24 np0005592157 dracut[1267]: *** Hardlinking files ***
Jan 22 07:50:24 np0005592157 dracut[1267]: *** Hardlinking files done ***
Jan 22 07:50:24 np0005592157 dracut[1267]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 22 07:50:25 np0005592157 kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Jan 22 07:50:25 np0005592157 kdumpctl[1014]: kdump: Starting kdump: [OK]
Jan 22 07:50:25 np0005592157 systemd[1]: Finished Crash recovery kernel arming.
Jan 22 07:50:25 np0005592157 systemd[1]: Startup finished in 1.915s (kernel) + 2.356s (initrd) + 16.510s (userspace) = 20.782s.
Jan 22 07:50:30 np0005592157 systemd[1]: Created slice User Slice of UID 1000.
Jan 22 07:50:30 np0005592157 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 22 07:50:30 np0005592157 systemd-logind[785]: New session 1 of user zuul.
Jan 22 07:50:30 np0005592157 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 22 07:50:30 np0005592157 systemd[1]: Starting User Manager for UID 1000...
Jan 22 07:50:30 np0005592157 systemd[4305]: Queued start job for default target Main User Target.
Jan 22 07:50:30 np0005592157 systemd[4305]: Created slice User Application Slice.
Jan 22 07:50:30 np0005592157 systemd[4305]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 07:50:30 np0005592157 systemd[4305]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 07:50:30 np0005592157 systemd[4305]: Reached target Paths.
Jan 22 07:50:30 np0005592157 systemd[4305]: Reached target Timers.
Jan 22 07:50:30 np0005592157 systemd[4305]: Starting D-Bus User Message Bus Socket...
Jan 22 07:50:30 np0005592157 systemd[4305]: Starting Create User's Volatile Files and Directories...
Jan 22 07:50:30 np0005592157 systemd[4305]: Finished Create User's Volatile Files and Directories.
Jan 22 07:50:30 np0005592157 systemd[4305]: Listening on D-Bus User Message Bus Socket.
Jan 22 07:50:30 np0005592157 systemd[4305]: Reached target Sockets.
Jan 22 07:50:30 np0005592157 systemd[4305]: Reached target Basic System.
Jan 22 07:50:30 np0005592157 systemd[4305]: Reached target Main User Target.
Jan 22 07:50:30 np0005592157 systemd[4305]: Startup finished in 117ms.
Jan 22 07:50:30 np0005592157 systemd[1]: Started User Manager for UID 1000.
Jan 22 07:50:30 np0005592157 systemd[1]: Started Session 1 of User zuul.
Jan 22 07:50:31 np0005592157 python3[4387]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:33 np0005592157 python3[4415]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:41 np0005592157 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 07:50:42 np0005592157 python3[4473]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:42 np0005592157 python3[4515]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 22 07:50:44 np0005592157 python3[4541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1DCoRB3r0Iy6aGg4LRzpWVb+uDCW+ivahM6mnwYTzs7NyJlgPrnZ6PV7GhjThi3qMi3wdL9+LpBaBPuOhI+k1w3f1FS+zKP3/xb59Ck+AhF8LIp3InS3sgWlvIGvXYvlwuN3aBMHp/hbvFOtbZFxgXhvIlVsk+m1K/J/50vtBBzyri7EjoTWDvY18FZoapjDeqss1t7AvCXVAcsVOfZsyssdWALG/AlGcmeZ9kZ/yza1tS0t7avldh0ZazNkLg/5jp3HQrTFLiETLQx8tBjdEj0Pme6UqjG17uVJkEVl4g3FLGiT4krCLRjW0sA3E3rd5e1m4tBIoSSqoqN2E+V9ctp/6T9Vpe3OcZdgKBUE9yz4tlHgQLxksFY2SiXEQYiWTctsRY30EsMJk2Qg65Fyp/ts6u4u66Uo27jNRB+ZD/vnAY4IKu94a2+6uIW/9oShh4f1cWrBlFzxXaUBj4KHar7HFljsOCavs7NCPccp7JoW8FoXONrfM+rhSgDbeDGE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:45 np0005592157 python3[4565]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:45 np0005592157 python3[4664]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:46 np0005592157 python3[4735]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086245.4644897-251-82555824908072/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa follow=False checksum=9eec2026f94d681755d58aa430eaf5c6b319017b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:46 np0005592157 python3[4858]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:47 np0005592157 python3[4929]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086246.4487183-306-260023372814059/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa.pub follow=False checksum=f8a39b98331ab3302b65dacd0b8176268aaf7e5b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:49 np0005592157 python3[4977]: ansible-ping Invoked with data=pong
Jan 22 07:50:50 np0005592157 python3[5001]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:52 np0005592157 python3[5059]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 22 07:50:53 np0005592157 python3[5091]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592157 python3[5115]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592157 python3[5139]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592157 python3[5163]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:55 np0005592157 python3[5187]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:55 np0005592157 python3[5211]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:57 np0005592157 python3[5237]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:58 np0005592157 python3[5315]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:58 np0005592157 python3[5388]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086257.5594864-31-50298948394627/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:59 np0005592157 python3[5436]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:59 np0005592157 python3[5460]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:59 np0005592157 python3[5484]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592157 python3[5508]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592157 python3[5532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592157 python3[5556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592157 python3[5580]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592157 python3[5604]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592157 python3[5628]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592157 python3[5652]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592157 python3[5676]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592157 python3[5700]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592157 python3[5724]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592157 python3[5748]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592157 python3[5772]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592157 python3[5796]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592157 python3[5820]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592157 python3[5844]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592157 python3[5868]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592157 python3[5892]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592157 python3[5916]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592157 python3[5940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:06 np0005592157 python3[5964]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:06 np0005592157 python3[5988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:06 np0005592157 python3[6012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:07 np0005592157 python3[6036]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:08 np0005592157 python3[6062]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 07:51:08 np0005592157 systemd[1]: Starting Time & Date Service...
Jan 22 07:51:09 np0005592157 systemd[1]: Started Time & Date Service.
Jan 22 07:51:09 np0005592157 systemd-timedated[6064]: Changed time zone to 'UTC' (UTC).
Jan 22 07:51:09 np0005592157 python3[6093]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:09 np0005592157 python3[6169]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:10 np0005592157 python3[6240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769086269.690329-251-195057737311629/source _original_basename=tmpd93v766v follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:10 np0005592157 python3[6340]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:11 np0005592157 python3[6411]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086270.7464256-301-9120948024685/source _original_basename=tmpf7ah5yka follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:12 np0005592157 python3[6513]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:13 np0005592157 python3[6586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086272.4698386-381-154060331246462/source _original_basename=tmpcpf2u75g follow=False checksum=4443522d106e75a5e1be95297fe05ddba04454bc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:13 np0005592157 python3[6634]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:14 np0005592157 python3[6660]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:14 np0005592157 python3[6740]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:15 np0005592157 python3[6813]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086274.3660011-451-130357494908667/source _original_basename=tmpfhba_0_h follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:15 np0005592157 python3[6864]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-37d2-1cc7-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:16 np0005592157 python3[6892]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-37d2-1cc7-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 22 07:51:18 np0005592157 python3[6920]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:39 np0005592157 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 07:51:42 np0005592157 python3[6948]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 22 07:52:28 np0005592157 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 22 07:52:28 np0005592157 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6184] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 07:52:28 np0005592157 systemd-udevd[6950]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6454] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6484] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6487] device (eth1): carrier: link connected
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6490] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6495] policy: auto-activating connection 'Wired connection 1' (95752ec9-4165-3466-a14e-bd81c298a1df)
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6499] device (eth1): Activation: starting connection 'Wired connection 1' (95752ec9-4165-3466-a14e-bd81c298a1df)
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6500] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6503] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6507] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:52:28 np0005592157 NetworkManager[857]: <info>  [1769086348.6512] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:52:29 np0005592157 python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-97dc-dff7-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:52:39 np0005592157 python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:52:40 np0005592157 python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086359.2319195-104-109207371914979/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=47dfb0e9c074688406a87336d2fbdd19abc16eca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:52:40 np0005592157 python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 07:52:40 np0005592157 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 07:52:40 np0005592157 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 07:52:40 np0005592157 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9321] caught SIGTERM, shutting down normally.
Jan 22 07:52:40 np0005592157 systemd[1]: Stopping Network Manager...
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9332] dhcp4 (eth0): canceled DHCP transaction
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9333] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9333] dhcp4 (eth0): state changed no lease
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9335] manager: NetworkManager state is now CONNECTING
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9460] dhcp4 (eth1): canceled DHCP transaction
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9461] dhcp4 (eth1): state changed no lease
Jan 22 07:52:40 np0005592157 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:52:40 np0005592157 NetworkManager[857]: <info>  [1769086360.9524] exiting (success)
Jan 22 07:52:40 np0005592157 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:52:40 np0005592157 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 07:52:40 np0005592157 systemd[1]: Stopped Network Manager.
Jan 22 07:52:40 np0005592157 systemd[1]: Starting Network Manager...
Jan 22 07:52:40 np0005592157 NetworkManager[7191]: <info>  [1769086360.9982] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:ab3239db-3271-4bdd-a6d4-5ceb67d83a2c)
Jan 22 07:52:40 np0005592157 NetworkManager[7191]: <info>  [1769086360.9984] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0032] manager[0x55c80a5c0000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 07:52:41 np0005592157 systemd[1]: Starting Hostname Service...
Jan 22 07:52:41 np0005592157 systemd[1]: Started Hostname Service.
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0837] hostname: hostname: using hostnamed
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0838] hostname: static hostname changed from (none) to "np0005592157.novalocal"
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0842] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0846] manager[0x55c80a5c0000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0846] manager[0x55c80a5c0000]: rfkill: WWAN hardware radio set enabled
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0875] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0875] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0876] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0877] manager: Networking is enabled by state file
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0879] settings: Loaded settings plugin: keyfile (internal)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0883] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0915] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0926] dhcp: init: Using DHCP client 'internal'
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0929] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0934] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0942] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0949] device (lo): Activation: starting connection 'lo' (04c4e722-12df-49cb-b7ee-622fbd23b757)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0956] device (eth0): carrier: link connected
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0961] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0966] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0967] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0974] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0982] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0989] device (eth1): carrier: link connected
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0994] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0999] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (95752ec9-4165-3466-a14e-bd81c298a1df) (indicated)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.0999] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1006] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1013] device (eth1): Activation: starting connection 'Wired connection 1' (95752ec9-4165-3466-a14e-bd81c298a1df)
Jan 22 07:52:41 np0005592157 systemd[1]: Started Network Manager.
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1020] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1025] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1027] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1030] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1033] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1047] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1051] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1054] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1059] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1067] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1071] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1090] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1097] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1116] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1124] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1131] device (lo): Activation: successful, device activated.
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1140] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1146] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 07:52:41 np0005592157 systemd[1]: Starting Network Manager Wait Online...
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1601] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1846] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1848] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1853] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1856] device (eth0): Activation: successful, device activated.
Jan 22 07:52:41 np0005592157 NetworkManager[7191]: <info>  [1769086361.1860] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 07:52:41 np0005592157 python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-97dc-dff7-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:52:51 np0005592157 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:53:11 np0005592157 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 07:53:23 np0005592157 systemd[4305]: Starting Mark boot as successful...
Jan 22 07:53:23 np0005592157 systemd[4305]: Finished Mark boot as successful.
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0007] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 07:53:26 np0005592157 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:53:26 np0005592157 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0268] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0270] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0275] device (eth1): Activation: successful, device activated.
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0282] manager: startup complete
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0283] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <warn>  [1769086406.0288] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0295] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 22 07:53:26 np0005592157 systemd[1]: Finished Network Manager Wait Online.
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0360] dhcp4 (eth1): canceled DHCP transaction
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0360] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0361] dhcp4 (eth1): state changed no lease
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0373] policy: auto-activating connection 'ci-private-network' (6b1da7b9-b8ce-5952-ba35-d58c8d2f17f6)
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0376] device (eth1): Activation: starting connection 'ci-private-network' (6b1da7b9-b8ce-5952-ba35-d58c8d2f17f6)
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0377] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0379] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0384] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0391] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0430] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0432] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 07:53:26 np0005592157 NetworkManager[7191]: <info>  [1769086406.0435] device (eth1): Activation: successful, device activated.
Jan 22 07:53:36 np0005592157 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:53:41 np0005592157 systemd-logind[785]: Session 1 logged out. Waiting for processes to exit.
Jan 22 07:54:57 np0005592157 systemd-logind[785]: New session 3 of user zuul.
Jan 22 07:54:57 np0005592157 systemd[1]: Started Session 3 of User zuul.
Jan 22 07:54:58 np0005592157 python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:54:58 np0005592157 python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086497.8051536-373-142701288929189/source _original_basename=tmp13dduul0 follow=False checksum=5e7e0974f47bfd675c68ead6f6109233c4c9d481 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:55:02 np0005592157 systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 07:55:02 np0005592157 systemd-logind[785]: Session 3 logged out. Waiting for processes to exit.
Jan 22 07:55:02 np0005592157 systemd-logind[785]: Removed session 3.
Jan 22 07:56:23 np0005592157 systemd[4305]: Created slice User Background Tasks Slice.
Jan 22 07:56:23 np0005592157 systemd[4305]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 07:56:23 np0005592157 systemd[4305]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 08:00:10 np0005592157 systemd-logind[785]: New session 4 of user zuul.
Jan 22 08:00:10 np0005592157 systemd[1]: Started Session 4 of User zuul.
Jan 22 08:00:10 np0005592157 python3[7514]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca0-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:11 np0005592157 python3[7543]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592157 python3[7569]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592157 python3[7595]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592157 python3[7621]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:13 np0005592157 python3[7647]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:13 np0005592157 python3[7725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:00:13 np0005592157 python3[7798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086813.2191668-362-70935358483983/source _original_basename=tmp3r122mzv follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:15 np0005592157 python3[7848]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:00:15 np0005592157 systemd[1]: Reloading.
Jan 22 08:00:15 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:00:16 np0005592157 python3[7904]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 22 08:00:17 np0005592157 python3[7930]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592157 python3[7958]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592157 python3[7986]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592157 python3[8014]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:19 np0005592157 python3[8041]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca7-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:19 np0005592157 python3[8071]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:00:22 np0005592157 systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 08:00:22 np0005592157 systemd[1]: session-4.scope: Consumed 4.388s CPU time.
Jan 22 08:00:22 np0005592157 systemd-logind[785]: Session 4 logged out. Waiting for processes to exit.
Jan 22 08:00:22 np0005592157 systemd-logind[785]: Removed session 4.
Jan 22 08:00:24 np0005592157 systemd-logind[785]: New session 5 of user zuul.
Jan 22 08:00:24 np0005592157 systemd[1]: Started Session 5 of User zuul.
Jan 22 08:00:25 np0005592157 python3[8106]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 08:00:31 np0005592157 irqbalance[783]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 22 08:00:31 np0005592157 irqbalance[783]: IRQ 27 affinity is now unmanaged
Jan 22 08:00:32 np0005592157 setsebool[8145]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 22 08:00:32 np0005592157 setsebool[8145]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 22 08:00:43 np0005592157 kernel: SELinux:  Converting 383 SID table entries...
Jan 22 08:00:43 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:00:43 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:00:43 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:00:43 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:00:43 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:00:43 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:00:43 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:00:53 np0005592157 kernel: SELinux:  Converting 386 SID table entries...
Jan 22 08:00:53 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:00:53 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:00:53 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:00:53 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:00:53 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:00:53 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:00:53 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:01:11 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 08:01:11 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:01:11 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:01:11 np0005592157 systemd[1]: Reloading.
Jan 22 08:01:11 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:01:12 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:01:25 np0005592157 python3[16566]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-af35-cd98-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:01:26 np0005592157 kernel: evm: overlay not supported
Jan 22 08:01:26 np0005592157 systemd[4305]: Starting D-Bus User Message Bus...
Jan 22 08:01:26 np0005592157 dbus-broker-launch[17154]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 22 08:01:26 np0005592157 dbus-broker-launch[17154]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 22 08:01:26 np0005592157 systemd[4305]: Started D-Bus User Message Bus.
Jan 22 08:01:26 np0005592157 dbus-broker-lau[17154]: Ready
Jan 22 08:01:26 np0005592157 systemd[4305]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 08:01:26 np0005592157 systemd[4305]: Created slice Slice /user.
Jan 22 08:01:26 np0005592157 systemd[4305]: podman-17085.scope: unit configures an IP firewall, but not running as root.
Jan 22 08:01:26 np0005592157 systemd[4305]: (This warning is only shown for the first unit using IP firewalling.)
Jan 22 08:01:26 np0005592157 systemd[4305]: Started podman-17085.scope.
Jan 22 08:01:26 np0005592157 systemd[4305]: Started podman-pause-d06c22bb.scope.
Jan 22 08:01:27 np0005592157 python3[17696]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.194:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.194:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:01:27 np0005592157 python3[17696]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 22 08:01:28 np0005592157 systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 08:01:28 np0005592157 systemd[1]: session-5.scope: Consumed 43.535s CPU time.
Jan 22 08:01:28 np0005592157 systemd-logind[785]: Session 5 logged out. Waiting for processes to exit.
Jan 22 08:01:28 np0005592157 systemd-logind[785]: Removed session 5.
Jan 22 08:01:54 np0005592157 systemd-logind[785]: New session 6 of user zuul.
Jan 22 08:01:54 np0005592157 systemd[1]: Started Session 6 of User zuul.
Jan 22 08:01:54 np0005592157 python3[27128]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:01:54 np0005592157 python3[27204]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:01:55 np0005592157 python3[27498]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005592157.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 22 08:02:00 np0005592157 python3[27881]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:02:00 np0005592157 python3[28059]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:02:01 np0005592157 python3[28191]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086920.340846-167-240056380951793/source _original_basename=tmps3qvuxby follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:02:02 np0005592157 python3[28383]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 22 08:02:02 np0005592157 systemd[1]: Starting Hostname Service...
Jan 22 08:02:02 np0005592157 systemd[1]: Started Hostname Service.
Jan 22 08:02:02 np0005592157 systemd-hostnamed[28432]: Changed pretty hostname to 'compute-0'
Jan 22 08:02:02 np0005592157 systemd-hostnamed[28432]: Hostname set to <compute-0> (static)
Jan 22 08:02:02 np0005592157 NetworkManager[7191]: <info>  [1769086922.2858] hostname: static hostname changed from "np0005592157.novalocal" to "compute-0"
Jan 22 08:02:02 np0005592157 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:02:02 np0005592157 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:02:02 np0005592157 systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 08:02:02 np0005592157 systemd[1]: session-6.scope: Consumed 2.400s CPU time.
Jan 22 08:02:02 np0005592157 systemd-logind[785]: Session 6 logged out. Waiting for processes to exit.
Jan 22 08:02:02 np0005592157 systemd-logind[785]: Removed session 6.
Jan 22 08:02:09 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:02:09 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:02:09 np0005592157 systemd[1]: man-db-cache-update.service: Consumed 56.858s CPU time.
Jan 22 08:02:09 np0005592157 systemd[1]: run-rffefda141ec045bf82bba5a32eedc018.service: Deactivated successfully.
Jan 22 08:02:12 np0005592157 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:02:32 np0005592157 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 08:05:23 np0005592157 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 22 08:05:23 np0005592157 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 22 08:05:23 np0005592157 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 22 08:05:23 np0005592157 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 22 08:07:01 np0005592157 systemd-logind[785]: New session 7 of user zuul.
Jan 22 08:07:01 np0005592157 systemd[1]: Started Session 7 of User zuul.
Jan 22 08:07:01 np0005592157 python3[30032]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:07:03 np0005592157 python3[30148]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:03 np0005592157 python3[30221]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.115548-34123-65359383718532/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:04 np0005592157 python3[30247]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:04 np0005592157 python3[30320]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.115548-34123-65359383718532/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:05 np0005592157 python3[30346]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:05 np0005592157 python3[30419]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.115548-34123-65359383718532/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:05 np0005592157 python3[30445]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:06 np0005592157 python3[30518]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.115548-34123-65359383718532/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:06 np0005592157 python3[30544]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:06 np0005592157 python3[30617]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.115548-34123-65359383718532/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:06 np0005592157 python3[30643]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:07 np0005592157 python3[30716]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.115548-34123-65359383718532/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:07 np0005592157 python3[30742]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:07 np0005592157 python3[30815]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.115548-34123-65359383718532/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:19 np0005592157 python3[30874]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:12:19 np0005592157 systemd[1]: session-7.scope: Deactivated successfully.
Jan 22 08:12:19 np0005592157 systemd[1]: session-7.scope: Consumed 4.857s CPU time.
Jan 22 08:12:19 np0005592157 systemd-logind[785]: Session 7 logged out. Waiting for processes to exit.
Jan 22 08:12:19 np0005592157 systemd-logind[785]: Removed session 7.
Jan 22 08:21:58 np0005592157 systemd-logind[785]: New session 8 of user zuul.
Jan 22 08:21:58 np0005592157 systemd[1]: Started Session 8 of User zuul.
Jan 22 08:21:59 np0005592157 python3.9[31052]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:00 np0005592157 python3.9[31233]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:22:11 np0005592157 systemd[1]: session-8.scope: Deactivated successfully.
Jan 22 08:22:11 np0005592157 systemd[1]: session-8.scope: Consumed 8.051s CPU time.
Jan 22 08:22:11 np0005592157 systemd-logind[785]: Session 8 logged out. Waiting for processes to exit.
Jan 22 08:22:11 np0005592157 systemd-logind[785]: Removed session 8.
Jan 22 08:22:26 np0005592157 systemd-logind[785]: New session 9 of user zuul.
Jan 22 08:22:26 np0005592157 systemd[1]: Started Session 9 of User zuul.
Jan 22 08:22:27 np0005592157 python3.9[31444]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 08:22:29 np0005592157 python3.9[31618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:30 np0005592157 python3.9[31770]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:22:31 np0005592157 python3.9[31923]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:22:32 np0005592157 python3.9[32075]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:32 np0005592157 python3.9[32227]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:22:33 np0005592157 python3.9[32350]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088152.4325032-177-26792856672881/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:34 np0005592157 python3.9[32502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:35 np0005592157 python3.9[32658]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:22:36 np0005592157 python3.9[32810]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:22:37 np0005592157 python3.9[32960]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:22:42 np0005592157 python3.9[33213]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:43 np0005592157 python3.9[33363]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:44 np0005592157 python3.9[33517]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:45 np0005592157 python3.9[33675]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:22:46 np0005592157 python3.9[33759]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:23:33 np0005592157 systemd[1]: Reloading.
Jan 22 08:23:33 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:33 np0005592157 systemd[1]: Starting dnf makecache...
Jan 22 08:23:33 np0005592157 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 22 08:23:34 np0005592157 dnf[33969]: Failed determining last makecache time.
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-barbican-42b4c41831408a8e323 153 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 163 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-cinder-1c00d6490d88e436f26ef 191 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-python-stevedore-c4acc5639fd2329372142 209 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-python-cloudkitty-tests-tempest-2c80f8 162 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-os-refresh-config-9bfc52b5049be2d8de61 156 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 179 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-python-designate-tests-tempest-347fdbc 172 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-glance-1fd12c29b339f30fe823e 178 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 198 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-manila-3c01b7181572c95dac462 170 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-python-whitebox-neutron-tests-tempest- 198 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-octavia-ba397f07a7331190208c 196 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-watcher-c014f81a8647287f6dcc 163 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-ansible-config_template-5ccaa22121a7ff 174 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 177 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-swift-dc98a8463506ac520c469a 182 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 systemd[1]: Reloading.
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-python-tempestconf-8515371b7cceebd4282 177 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 dnf[33969]: delorean-openstack-heat-ui-013accbfd179753bc3f0 120 kB/s | 3.0 kB     00:00
Jan 22 08:23:34 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:34 np0005592157 dnf[33969]: CentOS Stream 9 - BaseOS                         63 kB/s | 6.7 kB     00:00
Jan 22 08:23:34 np0005592157 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 22 08:23:34 np0005592157 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 22 08:23:34 np0005592157 dnf[33969]: CentOS Stream 9 - AppStream                      67 kB/s | 6.8 kB     00:00
Jan 22 08:23:34 np0005592157 systemd[1]: Reloading.
Jan 22 08:23:34 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:34 np0005592157 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 22 08:23:35 np0005592157 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 22 08:23:35 np0005592157 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 22 08:23:35 np0005592157 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 22 08:23:35 np0005592157 dnf[33969]: CentOS Stream 9 - CRB                            18 kB/s | 6.6 kB     00:00
Jan 22 08:23:35 np0005592157 dnf[33969]: CentOS Stream 9 - Extras packages                22 kB/s | 7.3 kB     00:00
Jan 22 08:23:36 np0005592157 dnf[33969]: dlrn-antelope-testing                           2.8 kB/s | 3.0 kB     00:01
Jan 22 08:23:36 np0005592157 dnf[33969]: dlrn-antelope-build-deps                        116 kB/s | 3.0 kB     00:00
Jan 22 08:23:36 np0005592157 dnf[33969]: centos9-rabbitmq                                131 kB/s | 3.0 kB     00:00
Jan 22 08:23:36 np0005592157 dnf[33969]: centos9-storage                                 128 kB/s | 3.0 kB     00:00
Jan 22 08:23:36 np0005592157 dnf[33969]: centos9-opstools                                139 kB/s | 3.0 kB     00:00
Jan 22 08:23:36 np0005592157 dnf[33969]: NFV SIG OpenvSwitch                              45 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592157 dnf[33969]: repo-setup-centos-appstream                     3.5 kB/s | 4.4 kB     00:01
Jan 22 08:23:38 np0005592157 dnf[33969]: repo-setup-centos-baseos                        164 kB/s | 3.9 kB     00:00
Jan 22 08:23:38 np0005592157 dnf[33969]: repo-setup-centos-highavailability              115 kB/s | 3.9 kB     00:00
Jan 22 08:23:38 np0005592157 dnf[33969]: repo-setup-centos-powertools                    141 kB/s | 4.3 kB     00:00
Jan 22 08:23:38 np0005592157 dnf[33969]: Extra Packages for Enterprise Linux 9 - x86_64  184 kB/s |  25 kB     00:00
Jan 22 08:23:39 np0005592157 dnf[33969]: Metadata cache created.
Jan 22 08:23:39 np0005592157 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 08:23:39 np0005592157 systemd[1]: Finished dnf makecache.
Jan 22 08:23:39 np0005592157 systemd[1]: dnf-makecache.service: Consumed 1.805s CPU time.
Jan 22 08:24:45 np0005592157 kernel: SELinux:  Converting 2723 SID table entries...
Jan 22 08:24:45 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:24:45 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:24:45 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:24:45 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:24:45 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:24:45 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:24:45 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:24:45 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 22 08:24:46 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:24:46 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:24:46 np0005592157 systemd[1]: Reloading.
Jan 22 08:24:46 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:24:46 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:24:47 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:24:47 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:24:47 np0005592157 systemd[1]: man-db-cache-update.service: Consumed 1.534s CPU time.
Jan 22 08:24:47 np0005592157 systemd[1]: run-re7b8e43dab244d14aec51036e3d5bc4d.service: Deactivated successfully.
Jan 22 08:24:57 np0005592157 python3.9[35364]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:24:59 np0005592157 python3.9[35645]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 08:25:00 np0005592157 python3.9[35797]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 08:25:05 np0005592157 python3.9[35951]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:09 np0005592157 python3.9[36103]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 08:25:11 np0005592157 python3.9[36255]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:12 np0005592157 python3.9[36407]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:13 np0005592157 python3.9[36530]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088311.9671688-666-58578297888601/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:14 np0005592157 python3.9[36682]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:15 np0005592157 python3.9[36834]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:16 np0005592157 python3.9[36987]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:17 np0005592157 python3.9[37139]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 08:25:17 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:25:18 np0005592157 python3.9[37293]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:25:20 np0005592157 python3.9[37451]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:25:21 np0005592157 python3.9[37611]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 08:25:21 np0005592157 python3.9[37764]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:25:22 np0005592157 irqbalance[783]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 22 08:25:22 np0005592157 irqbalance[783]: IRQ 26 affinity is now unmanaged
Jan 22 08:25:22 np0005592157 python3.9[37922]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 08:25:23 np0005592157 python3.9[38074]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:25:29 np0005592157 python3.9[38228]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:30 np0005592157 python3.9[38380]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:30 np0005592157 python3.9[38503]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088329.6744606-1023-107817871319120/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:32 np0005592157 python3.9[38655]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:25:32 np0005592157 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:25:32 np0005592157 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 08:25:32 np0005592157 kernel: Bridge firewalling registered
Jan 22 08:25:32 np0005592157 systemd-modules-load[38659]: Inserted module 'br_netfilter'
Jan 22 08:25:32 np0005592157 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:25:33 np0005592157 python3.9[38814]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:33 np0005592157 python3.9[38937]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088332.4888391-1092-205552725318003/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:34 np0005592157 python3.9[39089]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:25:38 np0005592157 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 22 08:25:38 np0005592157 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 22 08:25:38 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:25:38 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:25:38 np0005592157 systemd[1]: Reloading.
Jan 22 08:25:38 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:38 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:25:40 np0005592157 python3.9[40966]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:41 np0005592157 python3.9[41771]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 08:25:42 np0005592157 python3.9[42514]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:43 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:25:43 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:25:43 np0005592157 systemd[1]: man-db-cache-update.service: Consumed 5.675s CPU time.
Jan 22 08:25:43 np0005592157 systemd[1]: run-r49d3370b7fc446ed9559ef6508e8b4b7.service: Deactivated successfully.
Jan 22 08:25:43 np0005592157 python3.9[43258]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:43 np0005592157 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:25:44 np0005592157 systemd[1]: Starting Authorization Manager...
Jan 22 08:25:44 np0005592157 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:25:44 np0005592157 polkitd[43475]: Started polkitd version 0.117
Jan 22 08:25:44 np0005592157 systemd[1]: Started Authorization Manager.
Jan 22 08:25:45 np0005592157 python3.9[43645]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:45 np0005592157 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 08:25:45 np0005592157 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 08:25:45 np0005592157 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 08:25:45 np0005592157 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:25:45 np0005592157 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:25:46 np0005592157 python3.9[43806]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 08:25:50 np0005592157 python3.9[43958]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:50 np0005592157 systemd[1]: Reloading.
Jan 22 08:25:50 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:51 np0005592157 python3.9[44148]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:51 np0005592157 systemd[1]: Reloading.
Jan 22 08:25:51 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:53 np0005592157 python3.9[44337]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:53 np0005592157 python3.9[44490]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:53 np0005592157 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 22 08:25:54 np0005592157 python3.9[44643]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:57 np0005592157 python3.9[44805]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:57 np0005592157 python3.9[44958]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:25:57 np0005592157 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 08:25:57 np0005592157 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 08:25:57 np0005592157 systemd[1]: Stopping Apply Kernel Variables...
Jan 22 08:25:57 np0005592157 systemd[1]: Starting Apply Kernel Variables...
Jan 22 08:25:57 np0005592157 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 08:25:57 np0005592157 systemd[1]: Finished Apply Kernel Variables.
Jan 22 08:25:58 np0005592157 systemd[1]: session-9.scope: Deactivated successfully.
Jan 22 08:25:58 np0005592157 systemd[1]: session-9.scope: Consumed 2min 26.108s CPU time.
Jan 22 08:25:58 np0005592157 systemd-logind[785]: Session 9 logged out. Waiting for processes to exit.
Jan 22 08:25:58 np0005592157 systemd-logind[785]: Removed session 9.
Jan 22 08:26:03 np0005592157 systemd-logind[785]: New session 10 of user zuul.
Jan 22 08:26:03 np0005592157 systemd[1]: Started Session 10 of User zuul.
Jan 22 08:26:04 np0005592157 python3.9[45141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:06 np0005592157 python3.9[45297]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 08:26:07 np0005592157 python3.9[45450]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:26:08 np0005592157 python3.9[45608]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:26:09 np0005592157 python3.9[45768]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:26:10 np0005592157 python3.9[45852]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:26:14 np0005592157 python3.9[46016]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:26 np0005592157 kernel: SELinux:  Converting 2736 SID table entries...
Jan 22 08:26:26 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:26:26 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:26:26 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:26:26 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:26:26 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:26:26 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:26:26 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:26:26 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 22 08:26:26 np0005592157 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 22 08:26:27 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:26:27 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:26:28 np0005592157 systemd[1]: Reloading.
Jan 22 08:26:28 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:28 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:28 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:26:29 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:26:29 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:26:29 np0005592157 systemd[1]: run-r152fd3e4b4b2441c98405751ea778d6f.service: Deactivated successfully.
Jan 22 08:26:33 np0005592157 python3.9[47122]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:26:34 np0005592157 systemd[1]: Reloading.
Jan 22 08:26:34 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:34 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:35 np0005592157 systemd[1]: Starting Open vSwitch Database Unit...
Jan 22 08:26:35 np0005592157 chown[47164]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 22 08:26:35 np0005592157 ovs-ctl[47169]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 22 08:26:35 np0005592157 ovs-ctl[47169]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 22 08:26:35 np0005592157 ovs-ctl[47169]: Starting ovsdb-server [  OK  ]
Jan 22 08:26:35 np0005592157 ovs-vsctl[47218]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 22 08:26:35 np0005592157 ovs-vsctl[47234]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"7335e41f-b1b8-4c04-9c19-8788162d5bb4\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 22 08:26:35 np0005592157 ovs-ctl[47169]: Configuring Open vSwitch system IDs [  OK  ]
Jan 22 08:26:35 np0005592157 ovs-vsctl[47244]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 22 08:26:35 np0005592157 ovs-ctl[47169]: Enabling remote OVSDB managers [  OK  ]
Jan 22 08:26:35 np0005592157 systemd[1]: Started Open vSwitch Database Unit.
Jan 22 08:26:35 np0005592157 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 22 08:26:35 np0005592157 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 22 08:26:35 np0005592157 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 22 08:26:35 np0005592157 kernel: openvswitch: Open vSwitch switching datapath
Jan 22 08:26:35 np0005592157 ovs-ctl[47288]: Inserting openvswitch module [  OK  ]
Jan 22 08:26:35 np0005592157 ovs-ctl[47257]: Starting ovs-vswitchd [  OK  ]
Jan 22 08:26:35 np0005592157 ovs-vsctl[47305]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 22 08:26:35 np0005592157 ovs-ctl[47257]: Enabling remote OVSDB managers [  OK  ]
Jan 22 08:26:35 np0005592157 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 22 08:26:35 np0005592157 systemd[1]: Starting Open vSwitch...
Jan 22 08:26:35 np0005592157 systemd[1]: Finished Open vSwitch.
Jan 22 08:26:36 np0005592157 python3.9[47457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:38 np0005592157 python3.9[47609]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 08:26:39 np0005592157 kernel: SELinux:  Converting 2750 SID table entries...
Jan 22 08:26:39 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:26:39 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:26:39 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:26:39 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:26:39 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:26:39 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:26:39 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:26:40 np0005592157 python3.9[47764]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:41 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 22 08:26:41 np0005592157 python3.9[47922]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:44 np0005592157 python3.9[48075]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:26:45 np0005592157 python3.9[48362]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 08:26:46 np0005592157 python3.9[48512]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:26:47 np0005592157 python3.9[48666]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:49 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:26:49 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:26:49 np0005592157 systemd[1]: Reloading.
Jan 22 08:26:49 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:49 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:49 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:26:50 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:26:50 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:26:50 np0005592157 systemd[1]: run-r24118467a8604c1489b12e3d2b566498.service: Deactivated successfully.
Jan 22 08:26:52 np0005592157 python3.9[48983]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:26:52 np0005592157 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 08:26:52 np0005592157 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 08:26:52 np0005592157 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 08:26:52 np0005592157 systemd[1]: Stopping Network Manager...
Jan 22 08:26:52 np0005592157 NetworkManager[7191]: <info>  [1769088412.6963] caught SIGTERM, shutting down normally.
Jan 22 08:26:52 np0005592157 NetworkManager[7191]: <info>  [1769088412.6991] dhcp4 (eth0): canceled DHCP transaction
Jan 22 08:26:52 np0005592157 NetworkManager[7191]: <info>  [1769088412.6991] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:26:52 np0005592157 NetworkManager[7191]: <info>  [1769088412.6991] dhcp4 (eth0): state changed no lease
Jan 22 08:26:52 np0005592157 NetworkManager[7191]: <info>  [1769088412.7000] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 08:26:52 np0005592157 NetworkManager[7191]: <info>  [1769088412.7103] exiting (success)
Jan 22 08:26:52 np0005592157 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:26:52 np0005592157 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:26:52 np0005592157 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 08:26:52 np0005592157 systemd[1]: Stopped Network Manager.
Jan 22 08:26:52 np0005592157 systemd[1]: NetworkManager.service: Consumed 13.306s CPU time, 4.3M memory peak, read 0B from disk, written 41.5K to disk.
Jan 22 08:26:52 np0005592157 systemd[1]: Starting Network Manager...
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.7914] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:ab3239db-3271-4bdd-a6d4-5ceb67d83a2c)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.7917] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.7977] manager[0x55ad8f33b000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 08:26:52 np0005592157 systemd[1]: Starting Hostname Service...
Jan 22 08:26:52 np0005592157 systemd[1]: Started Hostname Service.
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8793] hostname: hostname: using hostnamed
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8794] hostname: static hostname changed from (none) to "compute-0"
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8797] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8802] manager[0x55ad8f33b000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8802] manager[0x55ad8f33b000]: rfkill: WWAN hardware radio set enabled
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8822] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8831] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8831] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8832] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8832] manager: Networking is enabled by state file
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8834] settings: Loaded settings plugin: keyfile (internal)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8838] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8865] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8875] dhcp: init: Using DHCP client 'internal'
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8877] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8883] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8888] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8895] device (lo): Activation: starting connection 'lo' (04c4e722-12df-49cb-b7ee-622fbd23b757)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8902] device (eth0): carrier: link connected
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8905] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8909] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8911] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8917] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8923] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8928] device (eth1): carrier: link connected
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8932] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8936] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6b1da7b9-b8ce-5952-ba35-d58c8d2f17f6) (indicated)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8936] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8942] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8948] device (eth1): Activation: starting connection 'ci-private-network' (6b1da7b9-b8ce-5952-ba35-d58c8d2f17f6)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8955] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 08:26:52 np0005592157 systemd[1]: Started Network Manager.
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8962] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8964] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8966] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8968] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8970] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8972] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8974] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8976] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8982] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.8984] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9016] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9046] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9063] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9077] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 08:26:52 np0005592157 systemd[1]: Starting Network Manager Wait Online...
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9170] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9175] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9185] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9195] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9203] device (lo): Activation: successful, device activated.
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9217] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9222] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9228] device (eth1): Activation: successful, device activated.
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9266] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9269] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9276] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9280] device (eth0): Activation: successful, device activated.
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9289] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 08:26:52 np0005592157 NetworkManager[48997]: <info>  [1769088412.9294] manager: startup complete
Jan 22 08:26:52 np0005592157 systemd[1]: Finished Network Manager Wait Online.
Jan 22 08:26:54 np0005592157 python3.9[49209]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:27:03 np0005592157 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:27:04 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:27:04 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:27:04 np0005592157 systemd[1]: Reloading.
Jan 22 08:27:04 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:27:04 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:27:04 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:27:05 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:27:05 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:27:05 np0005592157 systemd[1]: run-r9e0a4ae206354aca8a9ec9b0a0e9d437.service: Deactivated successfully.
Jan 22 08:27:08 np0005592157 python3.9[49667]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:27:09 np0005592157 python3.9[49819]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:10 np0005592157 python3.9[49973]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:11 np0005592157 python3.9[50125]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:12 np0005592157 python3.9[50277]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:12 np0005592157 python3.9[50429]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:13 np0005592157 python3.9[50581]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:14 np0005592157 python3.9[50704]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088433.2091784-647-179384200498647/.source _original_basename=.6_01dh6d follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:15 np0005592157 python3.9[50856]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:16 np0005592157 python3.9[51008]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 22 08:27:17 np0005592157 python3.9[51160]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:20 np0005592157 python3.9[51587]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 22 08:27:21 np0005592157 ansible-async_wrapper.py[51762]: Invoked with j748406228224 300 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5783367-845-125671960538499/AnsiballZ_edpm_os_net_config.py _
Jan 22 08:27:21 np0005592157 ansible-async_wrapper.py[51765]: Starting module and watcher
Jan 22 08:27:21 np0005592157 ansible-async_wrapper.py[51765]: Start watching 51766 (300)
Jan 22 08:27:21 np0005592157 ansible-async_wrapper.py[51766]: Start module (51766)
Jan 22 08:27:21 np0005592157 ansible-async_wrapper.py[51762]: Return async_wrapper task started.
Jan 22 08:27:22 np0005592157 python3.9[51767]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 22 08:27:22 np0005592157 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 22 08:27:22 np0005592157 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 22 08:27:22 np0005592157 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 22 08:27:22 np0005592157 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 22 08:27:22 np0005592157 kernel: cfg80211: failed to load regulatory.db
Jan 22 08:27:22 np0005592157 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 08:27:23 np0005592157 NetworkManager[48997]: <info>  [1769088443.9681] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51768 uid=0 result="success"
Jan 22 08:27:23 np0005592157 NetworkManager[48997]: <info>  [1769088443.9696] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0236] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0238] audit: op="connection-add" uuid="7cd0d61e-b80d-46c1-9557-deb29198a2e7" name="br-ex-br" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0254] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0255] audit: op="connection-add" uuid="16982885-61d9-46a5-afe7-b29d0666e119" name="br-ex-port" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0270] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0271] audit: op="connection-add" uuid="2dc85696-628a-4624-935a-a5c11de088d8" name="eth1-port" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0285] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0287] audit: op="connection-add" uuid="8a23045b-bbb1-434d-8a3d-65bc8e907ccd" name="vlan20-port" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0300] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0302] audit: op="connection-add" uuid="c4d32897-92f6-45b5-a3ac-5edda3e2df43" name="vlan21-port" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0316] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0319] audit: op="connection-add" uuid="6fb0ac62-2cf1-44bf-8422-fb1ece563a0b" name="vlan22-port" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0333] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0334] audit: op="connection-add" uuid="d0a25fd9-2b30-43b2-b89b-bbf54dfa22fb" name="vlan23-port" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0356] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0373] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0375] audit: op="connection-add" uuid="502d2865-e27c-45d1-88b8-583cec0d310f" name="br-ex-if" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0429] audit: op="connection-update" uuid="6b1da7b9-b8ce-5952-ba35-d58c8d2f17f6" name="ci-private-network" args="connection.timestamp,connection.port-type,connection.controller,connection.slave-type,connection.master,ovs-external-ids.data,ovs-interface.type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv4.routes,ipv4.addresses,ipv4.never-default,ipv4.dns,ipv4.routing-rules,ipv4.method" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0448] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0451] audit: op="connection-add" uuid="a59809b8-be64-4502-b5ce-ee18e0a98640" name="vlan20-if" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0469] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0471] audit: op="connection-add" uuid="eaf551a0-7f77-42b2-94d6-d7ea5056f5da" name="vlan21-if" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0490] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0491] audit: op="connection-add" uuid="152a1c68-e6a6-42f4-a4c1-20bc9afa7faf" name="vlan22-if" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0511] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0514] audit: op="connection-add" uuid="9b4f64c6-0584-4ab2-87a8-fdab2271de9e" name="vlan23-if" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0525] audit: op="connection-delete" uuid="95752ec9-4165-3466-a14e-bd81c298a1df" name="Wired connection 1" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0539] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0542] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0550] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0556] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7cd0d61e-b80d-46c1-9557-deb29198a2e7)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0558] audit: op="connection-activate" uuid="7cd0d61e-b80d-46c1-9557-deb29198a2e7" name="br-ex-br" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0561] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0562] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0569] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0574] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (16982885-61d9-46a5-afe7-b29d0666e119)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0577] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0579] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0585] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0590] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (2dc85696-628a-4624-935a-a5c11de088d8)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0593] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0595] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0601] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0606] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8a23045b-bbb1-434d-8a3d-65bc8e907ccd)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0610] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0612] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0619] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0624] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c4d32897-92f6-45b5-a3ac-5edda3e2df43)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0627] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0628] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0636] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0644] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (6fb0ac62-2cf1-44bf-8422-fb1ece563a0b)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0647] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0648] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0656] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0663] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d0a25fd9-2b30-43b2-b89b-bbf54dfa22fb)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0665] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0668] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0671] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0677] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0680] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0683] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0690] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (502d2865-e27c-45d1-88b8-583cec0d310f)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0691] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0696] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0699] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0700] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0702] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0717] device (eth1): disconnecting for new activation request.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0718] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0720] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0722] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0723] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0726] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0727] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0730] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0734] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (a59809b8-be64-4502-b5ce-ee18e0a98640)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0735] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0738] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0739] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0740] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0743] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0744] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0747] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0754] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (eaf551a0-7f77-42b2-94d6-d7ea5056f5da)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0754] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0758] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0760] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0761] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0764] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0765] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0769] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0774] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (152a1c68-e6a6-42f4-a4c1-20bc9afa7faf)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0774] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0778] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0780] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0781] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0784] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <warn>  [1769088444.0785] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0789] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0795] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (9b4f64c6-0584-4ab2-87a8-fdab2271de9e)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0795] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0798] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0800] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0802] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0803] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0820] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0823] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0826] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0828] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0834] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0838] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0842] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0845] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0847] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0852] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0855] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0858] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0860] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0864] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0868] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0871] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0872] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:27:24 np0005592157 kernel: ovs-system: entered promiscuous mode
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0876] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0881] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0884] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0886] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0891] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0895] dhcp4 (eth0): canceled DHCP transaction
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0896] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0896] dhcp4 (eth0): state changed no lease
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0897] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0906] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 kernel: Timeout policy base is empty
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0909] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51768 uid=0 result="fail" reason="Device is not activated"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0914] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 systemd-udevd[51774]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0945] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0951] device (eth1): disconnecting for new activation request.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0952] audit: op="connection-activate" uuid="6b1da7b9-b8ce-5952-ba35-d58c8d2f17f6" name="ci-private-network" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0953] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0956] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.0959] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1005] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51768 uid=0 result="success"
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1008] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1136] device (eth1): Activation: starting connection 'ci-private-network' (6b1da7b9-b8ce-5952-ba35-d58c8d2f17f6)
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1140] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1148] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1151] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1156] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1159] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1163] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1163] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1164] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1165] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1165] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1166] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1169] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1176] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1180] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1183] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1186] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1189] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1192] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1195] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1198] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1202] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1205] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1209] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1212] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1217] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1222] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 kernel: br-ex: entered promiscuous mode
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1307] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1310] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1315] device (eth1): Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 kernel: vlan22: entered promiscuous mode
Jan 22 08:27:24 np0005592157 systemd-udevd[51772]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592157 kernel: vlan21: entered promiscuous mode
Jan 22 08:27:24 np0005592157 systemd-udevd[51773]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1478] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1488] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1496] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1507] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1540] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1543] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1547] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 kernel: vlan20: entered promiscuous mode
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1584] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1588] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1593] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1597] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1613] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 kernel: vlan23: entered promiscuous mode
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1651] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1658] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1661] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1703] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1711] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1736] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1738] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1742] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1807] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1820] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1839] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1841] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592157 NetworkManager[48997]: <info>  [1769088444.1846] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592157 NetworkManager[48997]: <info>  [1769088445.2969] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51768 uid=0 result="success"
Jan 22 08:27:25 np0005592157 NetworkManager[48997]: <info>  [1769088445.4896] checkpoint[0x55ad8f311950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 22 08:27:25 np0005592157 NetworkManager[48997]: <info>  [1769088445.4899] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51768 uid=0 result="success"
Jan 22 08:27:25 np0005592157 python3.9[52127]: ansible-ansible.legacy.async_status Invoked with jid=j748406228224.51762 mode=status _async_dir=/root/.ansible_async
Jan 22 08:27:25 np0005592157 NetworkManager[48997]: <info>  [1769088445.8022] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51768 uid=0 result="success"
Jan 22 08:27:25 np0005592157 NetworkManager[48997]: <info>  [1769088445.8037] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51768 uid=0 result="success"
Jan 22 08:27:26 np0005592157 NetworkManager[48997]: <info>  [1769088446.0435] audit: op="networking-control" arg="global-dns-configuration" pid=51768 uid=0 result="success"
Jan 22 08:27:26 np0005592157 NetworkManager[48997]: <info>  [1769088446.0472] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 22 08:27:26 np0005592157 NetworkManager[48997]: <info>  [1769088446.0526] audit: op="networking-control" arg="global-dns-configuration" pid=51768 uid=0 result="success"
Jan 22 08:27:26 np0005592157 NetworkManager[48997]: <info>  [1769088446.0556] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51768 uid=0 result="success"
Jan 22 08:27:26 np0005592157 NetworkManager[48997]: <info>  [1769088446.2907] checkpoint[0x55ad8f311a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 22 08:27:26 np0005592157 NetworkManager[48997]: <info>  [1769088446.2913] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51768 uid=0 result="success"
Jan 22 08:27:26 np0005592157 ansible-async_wrapper.py[51766]: Module complete (51766)
Jan 22 08:27:26 np0005592157 ansible-async_wrapper.py[51765]: Done in kid B.
Jan 22 08:27:29 np0005592157 python3.9[52233]: ansible-ansible.legacy.async_status Invoked with jid=j748406228224.51762 mode=status _async_dir=/root/.ansible_async
Jan 22 08:27:29 np0005592157 python3.9[52333]: ansible-ansible.legacy.async_status Invoked with jid=j748406228224.51762 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 08:27:30 np0005592157 python3.9[52485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:31 np0005592157 python3.9[52608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088450.1855907-926-256281457601685/.source.returncode _original_basename=.p7fpfvsj follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:32 np0005592157 python3.9[52760]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:32 np0005592157 python3.9[52884]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088451.8597898-974-271593831492240/.source.cfg _original_basename=.y5g2rkud follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:34 np0005592157 python3.9[53036]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:27:34 np0005592157 systemd[1]: Reloading Network Manager...
Jan 22 08:27:34 np0005592157 NetworkManager[48997]: <info>  [1769088454.1198] audit: op="reload" arg="0" pid=53040 uid=0 result="success"
Jan 22 08:27:34 np0005592157 NetworkManager[48997]: <info>  [1769088454.1213] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 22 08:27:34 np0005592157 systemd[1]: Reloaded Network Manager.
Jan 22 08:27:35 np0005592157 systemd[1]: session-10.scope: Deactivated successfully.
Jan 22 08:27:35 np0005592157 systemd[1]: session-10.scope: Consumed 53.250s CPU time.
Jan 22 08:27:35 np0005592157 systemd-logind[785]: Session 10 logged out. Waiting for processes to exit.
Jan 22 08:27:35 np0005592157 systemd-logind[785]: Removed session 10.
Jan 22 08:27:40 np0005592157 systemd-logind[785]: New session 11 of user zuul.
Jan 22 08:27:40 np0005592157 systemd[1]: Started Session 11 of User zuul.
Jan 22 08:27:41 np0005592157 python3.9[53225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:42 np0005592157 python3.9[53380]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:44 np0005592157 python3.9[53573]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:27:44 np0005592157 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:27:44 np0005592157 systemd[1]: session-11.scope: Deactivated successfully.
Jan 22 08:27:44 np0005592157 systemd[1]: session-11.scope: Consumed 2.581s CPU time.
Jan 22 08:27:44 np0005592157 systemd-logind[785]: Session 11 logged out. Waiting for processes to exit.
Jan 22 08:27:44 np0005592157 systemd-logind[785]: Removed session 11.
Jan 22 08:27:50 np0005592157 systemd-logind[785]: New session 12 of user zuul.
Jan 22 08:27:50 np0005592157 systemd[1]: Started Session 12 of User zuul.
Jan 22 08:27:51 np0005592157 python3.9[53755]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:52 np0005592157 python3.9[53910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:53 np0005592157 python3.9[54066]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:54 np0005592157 python3.9[54150]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:27:57 np0005592157 python3.9[54304]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:59 np0005592157 python3.9[54499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:00 np0005592157 python3.9[54651]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:28:00 np0005592157 podman[54652]: 2026-01-22 13:28:00.34481145 +0000 UTC m=+0.104681191 system refresh
Jan 22 08:28:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:28:01 np0005592157 python3.9[54815]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:02 np0005592157 python3.9[54938]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088480.8962405-196-135401388358578/.source.json follow=False _original_basename=podman_network_config.j2 checksum=69bb59f9609506381c5d6013aa930bf031c424e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:03 np0005592157 python3.9[55090]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:03 np0005592157 python3.9[55213]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088482.6484606-241-155448412137197/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5a3e69bacb50e2daad69ea0ffc6501536059b061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:04 np0005592157 python3.9[55365]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:05 np0005592157 python3.9[55517]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:06 np0005592157 python3.9[55669]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:06 np0005592157 python3.9[55821]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:07 np0005592157 python3.9[55973]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:28:10 np0005592157 python3.9[56126]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:28:11 np0005592157 python3.9[56280]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:28:11 np0005592157 python3.9[56432]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:28:12 np0005592157 python3.9[56584]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:28:13 np0005592157 python3.9[56737]: ansible-service_facts Invoked
Jan 22 08:28:13 np0005592157 network[56754]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:28:13 np0005592157 network[56755]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:28:13 np0005592157 network[56756]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:28:20 np0005592157 python3.9[57208]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:28:23 np0005592157 python3.9[57361]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 08:28:25 np0005592157 python3.9[57513]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:26 np0005592157 python3.9[57638]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088504.8776433-673-136387328151418/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:26 np0005592157 python3.9[57792]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:27 np0005592157 python3.9[57917]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088506.3777-718-198576141426637/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:29 np0005592157 python3.9[58071]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:31 np0005592157 python3.9[58225]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:28:32 np0005592157 python3.9[58309]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:28:34 np0005592157 python3.9[58463]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:28:34 np0005592157 python3.9[58547]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:28:34 np0005592157 systemd[1]: Stopping NTP client/server...
Jan 22 08:28:34 np0005592157 chronyd[793]: chronyd exiting
Jan 22 08:28:34 np0005592157 systemd[1]: chronyd.service: Deactivated successfully.
Jan 22 08:28:34 np0005592157 systemd[1]: Stopped NTP client/server.
Jan 22 08:28:34 np0005592157 systemd[1]: Starting NTP client/server...
Jan 22 08:28:34 np0005592157 chronyd[58555]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 08:28:34 np0005592157 chronyd[58555]: Frequency -24.654 +/- 0.118 ppm read from /var/lib/chrony/drift
Jan 22 08:28:34 np0005592157 chronyd[58555]: Loaded seccomp filter (level 2)
Jan 22 08:28:34 np0005592157 systemd[1]: Started NTP client/server.
Jan 22 08:28:35 np0005592157 systemd-logind[785]: Session 12 logged out. Waiting for processes to exit.
Jan 22 08:28:35 np0005592157 systemd[1]: session-12.scope: Deactivated successfully.
Jan 22 08:28:35 np0005592157 systemd[1]: session-12.scope: Consumed 26.921s CPU time.
Jan 22 08:28:35 np0005592157 systemd-logind[785]: Removed session 12.
Jan 22 08:28:41 np0005592157 systemd-logind[785]: New session 13 of user zuul.
Jan 22 08:28:41 np0005592157 systemd[1]: Started Session 13 of User zuul.
Jan 22 08:28:42 np0005592157 python3.9[58736]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:44 np0005592157 python3.9[58888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:45 np0005592157 python3.9[59011]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088523.6745656-62-176555522362447/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:45 np0005592157 systemd[1]: session-13.scope: Deactivated successfully.
Jan 22 08:28:45 np0005592157 systemd[1]: session-13.scope: Consumed 1.793s CPU time.
Jan 22 08:28:45 np0005592157 systemd-logind[785]: Session 13 logged out. Waiting for processes to exit.
Jan 22 08:28:45 np0005592157 systemd-logind[785]: Removed session 13.
Jan 22 08:28:51 np0005592157 systemd-logind[785]: New session 14 of user zuul.
Jan 22 08:28:51 np0005592157 systemd[1]: Started Session 14 of User zuul.
Jan 22 08:28:52 np0005592157 python3.9[59189]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:28:53 np0005592157 python3.9[59345]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:54 np0005592157 python3.9[59520]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:55 np0005592157 python3.9[59643]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769088534.158851-83-62064471448378/.source.json _original_basename=.l2xqb410 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:56 np0005592157 python3.9[59795]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:57 np0005592157 python3.9[59918]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088536.4111009-152-5169078492572/.source _original_basename=.948eltor follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:58 np0005592157 python3.9[60070]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:59 np0005592157 python3.9[60222]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:00 np0005592157 python3.9[60345]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088539.020289-224-4336920762172/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:29:00 np0005592157 python3.9[60497]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:01 np0005592157 python3.9[60620]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088540.4593313-224-93730785516557/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:29:02 np0005592157 python3.9[60772]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:03 np0005592157 python3.9[60924]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:03 np0005592157 python3.9[61047]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088542.895458-335-166320555539170/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:06 np0005592157 python3.9[61199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:06 np0005592157 python3.9[61322]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088545.6491747-380-101380361915193/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:07 np0005592157 python3.9[61474]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:07 np0005592157 systemd[1]: Reloading.
Jan 22 08:29:08 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:08 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:08 np0005592157 systemd[1]: Reloading.
Jan 22 08:29:08 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:08 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:08 np0005592157 systemd[1]: Starting EDPM Container Shutdown...
Jan 22 08:29:08 np0005592157 systemd[1]: Finished EDPM Container Shutdown.
Jan 22 08:29:09 np0005592157 python3.9[61702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:09 np0005592157 python3.9[61825]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088548.9240594-449-211869533997379/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:10 np0005592157 python3.9[61977]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:11 np0005592157 python3.9[62100]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088550.2168782-494-249544908224975/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:12 np0005592157 python3.9[62252]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:12 np0005592157 systemd[1]: Reloading.
Jan 22 08:29:12 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:12 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:12 np0005592157 systemd[1]: Reloading.
Jan 22 08:29:12 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:12 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:12 np0005592157 systemd[1]: Starting Create netns directory...
Jan 22 08:29:12 np0005592157 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:29:12 np0005592157 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:29:12 np0005592157 systemd[1]: Finished Create netns directory.
Jan 22 08:29:13 np0005592157 python3.9[62479]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:29:13 np0005592157 network[62496]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:29:13 np0005592157 network[62497]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:29:13 np0005592157 network[62498]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:29:20 np0005592157 python3.9[62760]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:20 np0005592157 systemd[1]: Reloading.
Jan 22 08:29:20 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:20 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:20 np0005592157 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 22 08:29:20 np0005592157 iptables.init[62799]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 22 08:29:20 np0005592157 iptables.init[62799]: iptables: Flushing firewall rules: [  OK  ]
Jan 22 08:29:20 np0005592157 systemd[1]: iptables.service: Deactivated successfully.
Jan 22 08:29:20 np0005592157 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 22 08:29:21 np0005592157 python3.9[62995]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:22 np0005592157 python3.9[63149]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:22 np0005592157 systemd[1]: Reloading.
Jan 22 08:29:23 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:23 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:23 np0005592157 systemd[1]: Starting Netfilter Tables...
Jan 22 08:29:23 np0005592157 systemd[1]: Finished Netfilter Tables.
Jan 22 08:29:24 np0005592157 python3.9[63340]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:30 np0005592157 python3.9[63493]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:31 np0005592157 python3.9[63618]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088570.1041317-701-265681916719409/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:32 np0005592157 python3.9[63771]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:29:32 np0005592157 systemd[1]: Reloading OpenSSH server daemon...
Jan 22 08:29:32 np0005592157 systemd[1]: Reloaded OpenSSH server daemon.
Jan 22 08:29:33 np0005592157 python3.9[63927]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:34 np0005592157 python3.9[64079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:34 np0005592157 python3.9[64202]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088573.6803646-794-185221053348666/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:35 np0005592157 python3.9[64354]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 08:29:35 np0005592157 systemd[1]: Starting Time & Date Service...
Jan 22 08:29:36 np0005592157 systemd[1]: Started Time & Date Service.
Jan 22 08:29:37 np0005592157 python3.9[64510]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:37 np0005592157 python3.9[64662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:38 np0005592157 python3.9[64785]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088577.242004-899-245821248090581/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:39 np0005592157 python3.9[64937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:39 np0005592157 python3.9[65060]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088578.6264112-944-270962952404567/.source.yaml _original_basename=.6o_u895_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:40 np0005592157 python3.9[65212]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:40 np0005592157 python3.9[65335]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088579.924072-989-198717208078627/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:41 np0005592157 python3.9[65487]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:42 np0005592157 python3.9[65640]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
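The `nft -j list ruleset` invocation above dumps the ruleset as JSON. A minimal sketch of consuming that output, assuming libnftables' JSON layout (a top-level "nftables" array of single-key objects); the sample ruleset below is abbreviated and hypothetical, not this node's actual ruleset:

```python
import json

# Parse a (hypothetical, abbreviated) `nft -j list ruleset` dump and pull
# out the table names. Real output on this node would be much larger.
sample = json.loads("""
{"nftables": [
  {"metainfo": {"json_schema_version": 1}},
  {"table": {"family": "inet", "name": "filter", "handle": 1}},
  {"chain": {"family": "inet", "table": "filter", "name": "input"}}
]}
""")

def table_names(ruleset):
    # Each element is an object keyed by its kind: metainfo/table/chain/rule.
    return [obj["table"]["name"]
            for obj in ruleset["nftables"] if "table" in obj]

print(table_names(sample))  # ['filter']
```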
Jan 22 08:29:43 np0005592157 python3[65793]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:29:44 np0005592157 python3.9[65945]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:44 np0005592157 python3.9[66068]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088583.7513273-1106-203046308031781/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:45 np0005592157 python3.9[66220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:46 np0005592157 python3.9[66343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088585.1239169-1151-261972419879040/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:47 np0005592157 python3.9[66495]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:47 np0005592157 python3.9[66618]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088586.688347-1196-166478258066121/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:48 np0005592157 python3.9[66770]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:48 np0005592157 python3.9[66893]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088587.9901004-1241-191367325283486/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:49 np0005592157 python3.9[67045]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:50 np0005592157 python3.9[67168]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088589.3318813-1286-147766787363028/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:51 np0005592157 python3.9[67320]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:52 np0005592157 python3.9[67472]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:52 np0005592157 python3.9[67631]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
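The blockinfile task above persists the nftables configuration by writing an include block into /etc/sysconfig/nftables.conf. A sketch of the managed block it produces, using the four include paths and default Ansible markers visible in the logged parameters:

```python
# Reconstruct the ANSIBLE MANAGED BLOCK written to /etc/sysconfig/nftables.conf,
# with the include order exactly as passed to blockinfile in the log.
INCLUDES = [
    "/etc/nftables/iptables.nft",
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def managed_block(includes):
    lines = ["# BEGIN ANSIBLE MANAGED BLOCK"]
    lines += [f'include "{path}"' for path in includes]
    lines.append("# END ANSIBLE MANAGED BLOCK")
    return "\n".join(lines)

print(managed_block(INCLUDES))
```

The chains file is included before rules and jumps, mirroring the `nft -c -f -` validation pipeline a few entries earlier, where edpm-chains.nft is concatenated first.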
Jan 22 08:29:53 np0005592157 python3.9[67784]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:54 np0005592157 python3.9[67936]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:55 np0005592157 python3.9[68088]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:29:56 np0005592157 python3.9[68241]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
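The two ansible.posix.mount tasks above use state=mounted, which both mounts the hugetlbfs filesystems and persists them in /etc/fstab. A rough sketch of the resulting fstab entries, assuming standard fstab field layout with the src, opts, dump, and passno values shown in the log:

```python
# Build fstab lines equivalent to the two hugetlbfs mount tasks
# (src=none, dump=0, passno=0, pagesize option from the log).
def fstab_line(path, pagesize):
    # fields: device  mountpoint  fstype  options  dump  passno
    return f"none {path} hugetlbfs pagesize={pagesize} 0 0"

entries = [fstab_line("/dev/hugepages1G", "1G"),
           fstab_line("/dev/hugepages2M", "2M")]
for e in entries:
    print(e)
```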
Jan 22 08:29:57 np0005592157 systemd[1]: session-14.scope: Deactivated successfully.
Jan 22 08:29:57 np0005592157 systemd[1]: session-14.scope: Consumed 36.637s CPU time.
Jan 22 08:29:57 np0005592157 systemd-logind[785]: Session 14 logged out. Waiting for processes to exit.
Jan 22 08:29:57 np0005592157 systemd-logind[785]: Removed session 14.
Jan 22 08:30:03 np0005592157 systemd-logind[785]: New session 15 of user zuul.
Jan 22 08:30:03 np0005592157 systemd[1]: Started Session 15 of User zuul.
Jan 22 08:30:04 np0005592157 python3.9[68422]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 08:30:06 np0005592157 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 08:30:06 np0005592157 python3.9[68576]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:08 np0005592157 python3.9[68728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:09 np0005592157 python3.9[68880]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=#012 create=True mode=0644 path=/tmp/ansible.uy17pyzv state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:10 np0005592157 python3.9[69032]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.uy17pyzv' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:10 np0005592157 python3.9[69186]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.uy17pyzv state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
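The three tasks above rebuild /etc/ssh/ssh_known_hosts via a temp file: blockinfile writes host-key lines between managed-block markers, the temp file is cat'ed over the target, and the temp file is removed. A small sketch of the assembly step; the host pattern and key type are taken from the log, but the key material is a placeholder, not a real key:

```python
# Assemble a known_hosts managed block the way the blockinfile task does.
# "AAAA..." is placeholder key data, not the actual host key from the log.
def known_hosts_block(entries):
    """entries: list of (host_pattern, key_type, key_data) tuples."""
    body = [f"{hosts} {ktype} {key}" for hosts, ktype, key in entries]
    return "\n".join(["# BEGIN ANSIBLE MANAGED BLOCK"] + body
                     + ["# END ANSIBLE MANAGED BLOCK"]) + "\n"

block = known_hosts_block([
    ("compute-0.ctlplane.example.com,192.168.122.100,compute-0*",
     "ssh-ed25519", "AAAA..."),  # hypothetical key data
])
print(block)
```

Writing to a temp file first and copying it over atomically means a failed template render never leaves a half-written ssh_known_hosts behind.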
Jan 22 08:30:11 np0005592157 systemd-logind[785]: Session 15 logged out. Waiting for processes to exit.
Jan 22 08:30:11 np0005592157 systemd[1]: session-15.scope: Deactivated successfully.
Jan 22 08:30:11 np0005592157 systemd[1]: session-15.scope: Consumed 3.693s CPU time.
Jan 22 08:30:11 np0005592157 systemd-logind[785]: Removed session 15.
Jan 22 08:30:17 np0005592157 systemd-logind[785]: New session 16 of user zuul.
Jan 22 08:30:17 np0005592157 systemd[1]: Started Session 16 of User zuul.
Jan 22 08:30:18 np0005592157 python3.9[69364]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:19 np0005592157 python3.9[69520]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:30:20 np0005592157 python3.9[69674]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:30:21 np0005592157 python3.9[69827]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:22 np0005592157 python3.9[69980]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:23 np0005592157 python3.9[70134]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:24 np0005592157 python3.9[70289]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:24 np0005592157 systemd[1]: session-16.scope: Deactivated successfully.
Jan 22 08:30:24 np0005592157 systemd[1]: session-16.scope: Consumed 4.516s CPU time.
Jan 22 08:30:24 np0005592157 systemd-logind[785]: Session 16 logged out. Waiting for processes to exit.
Jan 22 08:30:24 np0005592157 systemd-logind[785]: Removed session 16.
Jan 22 08:30:30 np0005592157 systemd-logind[785]: New session 17 of user zuul.
Jan 22 08:30:30 np0005592157 systemd[1]: Started Session 17 of User zuul.
Jan 22 08:30:31 np0005592157 python3.9[70467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:32 np0005592157 python3.9[70623]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:30:33 np0005592157 python3.9[70707]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:30:36 np0005592157 python3.9[70858]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:37 np0005592157 python3.9[71009]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:30:38 np0005592157 python3.9[71159]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:38 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:30:39 np0005592157 python3.9[71310]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:40 np0005592157 systemd[1]: session-17.scope: Deactivated successfully.
Jan 22 08:30:40 np0005592157 systemd[1]: session-17.scope: Consumed 6.156s CPU time.
Jan 22 08:30:40 np0005592157 systemd-logind[785]: Session 17 logged out. Waiting for processes to exit.
Jan 22 08:30:40 np0005592157 systemd-logind[785]: Removed session 17.
Jan 22 08:30:44 np0005592157 chronyd[58555]: Selected source 167.160.187.179 (pool.ntp.org)
Jan 22 08:30:48 np0005592157 systemd-logind[785]: New session 18 of user zuul.
Jan 22 08:30:48 np0005592157 systemd[1]: Started Session 18 of User zuul.
Jan 22 08:30:55 np0005592157 python3[72076]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:57 np0005592157 python3[72171]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 08:30:59 np0005592157 python3[72198]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:30:59 np0005592157 python3[72224]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:59 np0005592157 kernel: loop: module loaded
Jan 22 08:30:59 np0005592157 kernel: loop3: detected capacity change from 0 to 14680064
Jan 22 08:31:00 np0005592157 python3[72258]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:31:00 np0005592157 lvm[72261]: PV /dev/loop3 not used.
Jan 22 08:31:00 np0005592157 lvm[72263]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:00 np0005592157 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 22 08:31:00 np0005592157 lvm[72273]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:00 np0005592157 lvm[72273]: VG ceph_vg0 finished
Jan 22 08:31:00 np0005592157 lvm[72271]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 22 08:31:00 np0005592157 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 22 08:31:00 np0005592157 python3[72351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:31:01 np0005592157 python3[72424]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088660.6003122-37029-148274126894824/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:31:02 np0005592157 python3[72474]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:31:02 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:02 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:02 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:02 np0005592157 systemd[1]: Starting Ceph OSD losetup...
Jan 22 08:31:02 np0005592157 bash[72514]: /dev/loop3: [64513]:4328450 (/var/lib/ceph-osd-0.img)
Jan 22 08:31:02 np0005592157 systemd[1]: Finished Ceph OSD losetup.
Jan 22 08:31:02 np0005592157 lvm[72515]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:02 np0005592157 lvm[72515]: VG ceph_vg0 finished
Jan 22 08:31:04 np0005592157 python3[72539]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:31:07 np0005592157 python3[72632]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 08:31:09 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:31:09 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:31:10 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:31:10 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:31:10 np0005592157 systemd[1]: run-r4bf8841b20104d6892930528f210a36f.service: Deactivated successfully.
Jan 22 08:31:10 np0005592157 python3[72743]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:31:10 np0005592157 python3[72771]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:31:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:11 np0005592157 python3[72833]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:31:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:12 np0005592157 python3[72859]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:31:12 np0005592157 python3[72937]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:31:13 np0005592157 python3[73010]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088672.5005863-37220-18095215527090/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:31:13 np0005592157 python3[73112]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:31:14 np0005592157 python3[73185]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088673.6499512-37238-272701309560847/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:31:14 np0005592157 python3[73235]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:31:15 np0005592157 python3[73263]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:31:15 np0005592157 python3[73291]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:31:15 np0005592157 python3[73317]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:31:16 np0005592157 python3[73343]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 088fe176-0106-5401-803c-2da38b73b76a --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:31:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:16 np0005592157 systemd-logind[785]: New session 19 of user ceph-admin.
Jan 22 08:31:16 np0005592157 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 08:31:16 np0005592157 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 08:31:16 np0005592157 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 08:31:16 np0005592157 systemd[1]: Starting User Manager for UID 42477...
Jan 22 08:31:16 np0005592157 systemd[73363]: Queued start job for default target Main User Target.
Jan 22 08:31:16 np0005592157 systemd[73363]: Created slice User Application Slice.
Jan 22 08:31:16 np0005592157 systemd[73363]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 08:31:16 np0005592157 systemd[73363]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 08:31:16 np0005592157 systemd[73363]: Reached target Paths.
Jan 22 08:31:16 np0005592157 systemd[73363]: Reached target Timers.
Jan 22 08:31:16 np0005592157 systemd[73363]: Starting D-Bus User Message Bus Socket...
Jan 22 08:31:16 np0005592157 systemd[73363]: Starting Create User's Volatile Files and Directories...
Jan 22 08:31:16 np0005592157 systemd[73363]: Listening on D-Bus User Message Bus Socket.
Jan 22 08:31:16 np0005592157 systemd[73363]: Reached target Sockets.
Jan 22 08:31:16 np0005592157 systemd[73363]: Finished Create User's Volatile Files and Directories.
Jan 22 08:31:16 np0005592157 systemd[73363]: Reached target Basic System.
Jan 22 08:31:16 np0005592157 systemd[73363]: Reached target Main User Target.
Jan 22 08:31:16 np0005592157 systemd[73363]: Startup finished in 149ms.
Jan 22 08:31:16 np0005592157 systemd[1]: Started User Manager for UID 42477.
Jan 22 08:31:16 np0005592157 systemd[1]: Started Session 19 of User ceph-admin.
Jan 22 08:31:17 np0005592157 systemd-logind[785]: Session 19 logged out. Waiting for processes to exit.
Jan 22 08:31:17 np0005592157 systemd[1]: session-19.scope: Deactivated successfully.
Jan 22 08:31:17 np0005592157 systemd-logind[785]: Removed session 19.
Jan 22 08:31:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-compat2606620566-lower\x2dmapped.mount: Deactivated successfully.
Jan 22 08:31:27 np0005592157 systemd[1]: Stopping User Manager for UID 42477...
Jan 22 08:31:27 np0005592157 systemd[73363]: Activating special unit Exit the Session...
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped target Main User Target.
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped target Basic System.
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped target Paths.
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped target Sockets.
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped target Timers.
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 08:31:27 np0005592157 systemd[73363]: Closed D-Bus User Message Bus Socket.
Jan 22 08:31:27 np0005592157 systemd[73363]: Stopped Create User's Volatile Files and Directories.
Jan 22 08:31:27 np0005592157 systemd[73363]: Removed slice User Application Slice.
Jan 22 08:31:27 np0005592157 systemd[73363]: Reached target Shutdown.
Jan 22 08:31:27 np0005592157 systemd[73363]: Finished Exit the Session.
Jan 22 08:31:27 np0005592157 systemd[73363]: Reached target Exit the Session.
Jan 22 08:31:27 np0005592157 systemd[1]: user@42477.service: Deactivated successfully.
Jan 22 08:31:27 np0005592157 systemd[1]: Stopped User Manager for UID 42477.
Jan 22 08:31:27 np0005592157 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 22 08:31:27 np0005592157 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 22 08:31:27 np0005592157 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 22 08:31:27 np0005592157 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 22 08:31:27 np0005592157 systemd[1]: Removed slice User Slice of UID 42477.
Jan 22 08:31:47 np0005592157 podman[73417]: 2026-01-22 13:31:47.07763583 +0000 UTC m=+29.983109875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:47 np0005592157 podman[73478]: 2026-01-22 13:31:47.153808038 +0000 UTC m=+0.048626219 container create 49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118 (image=quay.io/ceph/ceph:v18, name=gifted_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:31:47 np0005592157 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 22 08:31:47 np0005592157 systemd[1]: Started libpod-conmon-49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118.scope.
Jan 22 08:31:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:47 np0005592157 podman[73478]: 2026-01-22 13:31:47.133975519 +0000 UTC m=+0.028793710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:47 np0005592157 podman[73478]: 2026-01-22 13:31:47.270134896 +0000 UTC m=+0.164953097 container init 49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118 (image=quay.io/ceph/ceph:v18, name=gifted_khorana, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:31:47 np0005592157 podman[73478]: 2026-01-22 13:31:47.281806784 +0000 UTC m=+0.176624965 container start 49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118 (image=quay.io/ceph/ceph:v18, name=gifted_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:31:47 np0005592157 podman[73478]: 2026-01-22 13:31:47.285424443 +0000 UTC m=+0.180242664 container attach 49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118 (image=quay.io/ceph/ceph:v18, name=gifted_khorana, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:31:47 np0005592157 gifted_khorana[73494]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 22 08:31:47 np0005592157 systemd[1]: libpod-49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118.scope: Deactivated successfully.
Jan 22 08:31:47 np0005592157 podman[73478]: 2026-01-22 13:31:47.609223466 +0000 UTC m=+0.504041657 container died 49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118 (image=quay.io/ceph/ceph:v18, name=gifted_khorana, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 08:31:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-752360fc3eaf51c57cd18554cf96ef9a6c39a934c642cf8e247da47e63880e12-merged.mount: Deactivated successfully.
Jan 22 08:31:47 np0005592157 podman[73478]: 2026-01-22 13:31:47.657430044 +0000 UTC m=+0.552248205 container remove 49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118 (image=quay.io/ceph/ceph:v18, name=gifted_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:31:47 np0005592157 systemd[1]: libpod-conmon-49883ae3bb1458fefec8cd4440afc4a48e115d91bedcbb72fc7d3d95e2649118.scope: Deactivated successfully.
Jan 22 08:31:47 np0005592157 podman[73513]: 2026-01-22 13:31:47.728523957 +0000 UTC m=+0.039115475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:50 np0005592157 podman[73513]: 2026-01-22 13:31:50.960389579 +0000 UTC m=+3.270981057 container create c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf (image=quay.io/ceph/ceph:v18, name=romantic_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:51 np0005592157 systemd[1]: Started libpod-conmon-c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf.scope.
Jan 22 08:31:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:51 np0005592157 podman[73513]: 2026-01-22 13:31:51.068582912 +0000 UTC m=+3.379174480 container init c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf (image=quay.io/ceph/ceph:v18, name=romantic_thompson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 22 08:31:51 np0005592157 podman[73513]: 2026-01-22 13:31:51.078302542 +0000 UTC m=+3.388894060 container start c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf (image=quay.io/ceph/ceph:v18, name=romantic_thompson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:31:51 np0005592157 podman[73513]: 2026-01-22 13:31:51.083380578 +0000 UTC m=+3.393972096 container attach c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf (image=quay.io/ceph/ceph:v18, name=romantic_thompson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:51 np0005592157 romantic_thompson[73530]: 167 167
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73513]: 2026-01-22 13:31:51.085350326 +0000 UTC m=+3.395941824 container died c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf (image=quay.io/ceph/ceph:v18, name=romantic_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 08:31:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fe653818780e166eeb206b3cfdd77ca5737260864c89f5c0f15ea106a814e976-merged.mount: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73513]: 2026-01-22 13:31:51.126131954 +0000 UTC m=+3.436723432 container remove c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf (image=quay.io/ceph/ceph:v18, name=romantic_thompson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:31:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-conmon-c404fb291d98a1c2f8f5559416b165429c92f0edd8b672595a8392c019d7d8bf.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73545]: 2026-01-22 13:31:51.202590333 +0000 UTC m=+0.047180907 container create 6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3 (image=quay.io/ceph/ceph:v18, name=wonderful_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:31:51 np0005592157 systemd[1]: Started libpod-conmon-6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3.scope.
Jan 22 08:31:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:51 np0005592157 podman[73545]: 2026-01-22 13:31:51.272953202 +0000 UTC m=+0.117543756 container init 6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3 (image=quay.io/ceph/ceph:v18, name=wonderful_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:51 np0005592157 podman[73545]: 2026-01-22 13:31:51.18101427 +0000 UTC m=+0.025604824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:51 np0005592157 podman[73545]: 2026-01-22 13:31:51.279030292 +0000 UTC m=+0.123620836 container start 6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3 (image=quay.io/ceph/ceph:v18, name=wonderful_ellis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:31:51 np0005592157 podman[73545]: 2026-01-22 13:31:51.282161359 +0000 UTC m=+0.126751943 container attach 6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3 (image=quay.io/ceph/ceph:v18, name=wonderful_ellis, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:51 np0005592157 wonderful_ellis[73562]: AQDHJnJpEEtoEhAABmQJ8xaXgDzkyecYkfLhTA==
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73545]: 2026-01-22 13:31:51.313853522 +0000 UTC m=+0.158444066 container died 6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3 (image=quay.io/ceph/ceph:v18, name=wonderful_ellis, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:31:51 np0005592157 podman[73545]: 2026-01-22 13:31:51.357535701 +0000 UTC m=+0.202126255 container remove 6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3 (image=quay.io/ceph/ceph:v18, name=wonderful_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-conmon-6a0fc83dc2cdc03d789d2ba782d03153be0544ce4f02b693feb6621c56af35a3.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73576]: 2026-01-22 13:31:51.433082588 +0000 UTC m=+0.053232746 container create 52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8 (image=quay.io/ceph/ceph:v18, name=eager_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:31:51 np0005592157 systemd[1]: Started libpod-conmon-52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8.scope.
Jan 22 08:31:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:51 np0005592157 podman[73576]: 2026-01-22 13:31:51.407422174 +0000 UTC m=+0.027572342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:51 np0005592157 podman[73576]: 2026-01-22 13:31:51.502074373 +0000 UTC m=+0.122224621 container init 52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8 (image=quay.io/ceph/ceph:v18, name=eager_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 08:31:51 np0005592157 podman[73576]: 2026-01-22 13:31:51.507437945 +0000 UTC m=+0.127588113 container start 52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8 (image=quay.io/ceph/ceph:v18, name=eager_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 08:31:51 np0005592157 podman[73576]: 2026-01-22 13:31:51.511029184 +0000 UTC m=+0.131179432 container attach 52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8 (image=quay.io/ceph/ceph:v18, name=eager_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:31:51 np0005592157 eager_chebyshev[73598]: AQDHJnJpGb9gIBAAYr8wlZPLewypldCiinBVDQ==
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73576]: 2026-01-22 13:31:51.548870949 +0000 UTC m=+0.169021107 container died 52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8 (image=quay.io/ceph/ceph:v18, name=eager_chebyshev, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:31:51 np0005592157 podman[73576]: 2026-01-22 13:31:51.596109266 +0000 UTC m=+0.216259424 container remove 52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8 (image=quay.io/ceph/ceph:v18, name=eager_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-conmon-52bbf00b975e9f1e310e2bcffe9fcd215220598c67fa70c9ecd253eb0e03caf8.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73618]: 2026-01-22 13:31:51.662975958 +0000 UTC m=+0.044599113 container create 9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a (image=quay.io/ceph/ceph:v18, name=sweet_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:31:51 np0005592157 systemd[1]: Started libpod-conmon-9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a.scope.
Jan 22 08:31:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:51 np0005592157 podman[73618]: 2026-01-22 13:31:51.73996436 +0000 UTC m=+0.121587545 container init 9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a (image=quay.io/ceph/ceph:v18, name=sweet_allen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:31:51 np0005592157 podman[73618]: 2026-01-22 13:31:51.646092041 +0000 UTC m=+0.027715216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:51 np0005592157 podman[73618]: 2026-01-22 13:31:51.747218299 +0000 UTC m=+0.128841454 container start 9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a (image=quay.io/ceph/ceph:v18, name=sweet_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:31:51 np0005592157 podman[73618]: 2026-01-22 13:31:51.751152167 +0000 UTC m=+0.132775372 container attach 9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a (image=quay.io/ceph/ceph:v18, name=sweet_allen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:31:51 np0005592157 sweet_allen[73634]: AQDHJnJpMaihLRAARsXRHXP27aKsIf8lMSl3YA==
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73618]: 2026-01-22 13:31:51.769214963 +0000 UTC m=+0.150838128 container died 9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a (image=quay.io/ceph/ceph:v18, name=sweet_allen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 08:31:51 np0005592157 podman[73618]: 2026-01-22 13:31:51.811121668 +0000 UTC m=+0.192744823 container remove 9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a (image=quay.io/ceph/ceph:v18, name=sweet_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-conmon-9fffd7b681736fdd3ec6087481eb994fd51680bb1fe2d11215a3e52c03753d6a.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73654]: 2026-01-22 13:31:51.881147828 +0000 UTC m=+0.051050162 container create 48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9 (image=quay.io/ceph/ceph:v18, name=xenodochial_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 08:31:51 np0005592157 systemd[1]: Started libpod-conmon-48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9.scope.
Jan 22 08:31:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef4f71e4943b8c095fa159bdf633b4b7985b4c9d5b765a608cd84106e208b719/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:51 np0005592157 podman[73654]: 2026-01-22 13:31:51.945941769 +0000 UTC m=+0.115844083 container init 48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9 (image=quay.io/ceph/ceph:v18, name=xenodochial_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:31:51 np0005592157 podman[73654]: 2026-01-22 13:31:51.852543792 +0000 UTC m=+0.022446176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:51 np0005592157 podman[73654]: 2026-01-22 13:31:51.9520376 +0000 UTC m=+0.121939924 container start 48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9 (image=quay.io/ceph/ceph:v18, name=xenodochial_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:31:51 np0005592157 podman[73654]: 2026-01-22 13:31:51.956036559 +0000 UTC m=+0.125938883 container attach 48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9 (image=quay.io/ceph/ceph:v18, name=xenodochial_shamir, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:51 np0005592157 xenodochial_shamir[73670]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 22 08:31:51 np0005592157 xenodochial_shamir[73670]: setting min_mon_release = pacific
Jan 22 08:31:51 np0005592157 xenodochial_shamir[73670]: /usr/bin/monmaptool: set fsid to 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:51 np0005592157 xenodochial_shamir[73670]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 22 08:31:51 np0005592157 systemd[1]: libpod-48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9.scope: Deactivated successfully.
Jan 22 08:31:51 np0005592157 podman[73654]: 2026-01-22 13:31:51.992330375 +0000 UTC m=+0.162232689 container died 48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9 (image=quay.io/ceph/ceph:v18, name=xenodochial_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:31:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ef4f71e4943b8c095fa159bdf633b4b7985b4c9d5b765a608cd84106e208b719-merged.mount: Deactivated successfully.
Jan 22 08:31:52 np0005592157 podman[73654]: 2026-01-22 13:31:52.035085012 +0000 UTC m=+0.204987336 container remove 48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9 (image=quay.io/ceph/ceph:v18, name=xenodochial_shamir, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 22 08:31:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:52 np0005592157 systemd[1]: libpod-conmon-48449e508e5840a879e6f7480044c1e9370e5940db07992b2856d78bd8dd40a9.scope: Deactivated successfully.
Jan 22 08:31:52 np0005592157 podman[73687]: 2026-01-22 13:31:52.122125992 +0000 UTC m=+0.055878051 container create 6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73 (image=quay.io/ceph/ceph:v18, name=serene_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 08:31:52 np0005592157 systemd[1]: Started libpod-conmon-6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73.scope.
Jan 22 08:31:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49456dc456c8b68df1284e28f7c54bdfc0cf2364ffe0bdc61e9114dc9c2ae561/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49456dc456c8b68df1284e28f7c54bdfc0cf2364ffe0bdc61e9114dc9c2ae561/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49456dc456c8b68df1284e28f7c54bdfc0cf2364ffe0bdc61e9114dc9c2ae561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:52 np0005592157 podman[73687]: 2026-01-22 13:31:52.095384842 +0000 UTC m=+0.029136981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49456dc456c8b68df1284e28f7c54bdfc0cf2364ffe0bdc61e9114dc9c2ae561/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:52 np0005592157 podman[73687]: 2026-01-22 13:31:52.205389369 +0000 UTC m=+0.139141458 container init 6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73 (image=quay.io/ceph/ceph:v18, name=serene_tesla, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:52 np0005592157 podman[73687]: 2026-01-22 13:31:52.219107258 +0000 UTC m=+0.152859317 container start 6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73 (image=quay.io/ceph/ceph:v18, name=serene_tesla, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:31:52 np0005592157 podman[73687]: 2026-01-22 13:31:52.223760763 +0000 UTC m=+0.157512832 container attach 6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73 (image=quay.io/ceph/ceph:v18, name=serene_tesla, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:31:52 np0005592157 systemd[1]: libpod-6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73.scope: Deactivated successfully.
Jan 22 08:31:52 np0005592157 podman[73687]: 2026-01-22 13:31:52.330746987 +0000 UTC m=+0.264499096 container died 6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73 (image=quay.io/ceph/ceph:v18, name=serene_tesla, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:31:52 np0005592157 podman[73687]: 2026-01-22 13:31:52.375093882 +0000 UTC m=+0.308845981 container remove 6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73 (image=quay.io/ceph/ceph:v18, name=serene_tesla, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:52 np0005592157 systemd[1]: libpod-conmon-6d4d62896784e25e88f3cb520e5ca5524e14196d8a6e535bfed58fa06969ad73.scope: Deactivated successfully.
Jan 22 08:31:52 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:52 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:52 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:52 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:52 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:52 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:52 np0005592157 systemd[1]: Reached target All Ceph clusters and services.
Jan 22 08:31:52 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:52 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:52 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:53 np0005592157 systemd[1]: Reached target Ceph cluster 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:31:53 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:53 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:53 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:53 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:53 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:53 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:53 np0005592157 systemd[1]: Created slice Slice /system/ceph-088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:31:53 np0005592157 systemd[1]: Reached target System Time Set.
Jan 22 08:31:53 np0005592157 systemd[1]: Reached target System Time Synchronized.
Jan 22 08:31:53 np0005592157 systemd[1]: Starting Ceph mon.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:31:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:53 np0005592157 podman[73981]: 2026-01-22 13:31:53.953165682 +0000 UTC m=+0.049725399 container create dce0cf735954ffc728950870c6aa591d0624727e33618704c0c9e34dfe7d2ea4 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298a5651643082907a0e56050a37e996f289d5375925239159ea69708c90d783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298a5651643082907a0e56050a37e996f289d5375925239159ea69708c90d783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298a5651643082907a0e56050a37e996f289d5375925239159ea69708c90d783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298a5651643082907a0e56050a37e996f289d5375925239159ea69708c90d783/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 podman[73981]: 2026-01-22 13:31:54.018802114 +0000 UTC m=+0.115361851 container init dce0cf735954ffc728950870c6aa591d0624727e33618704c0c9e34dfe7d2ea4 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:54 np0005592157 podman[73981]: 2026-01-22 13:31:53.935877995 +0000 UTC m=+0.032437742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:54 np0005592157 podman[73981]: 2026-01-22 13:31:54.031509838 +0000 UTC m=+0.128069565 container start dce0cf735954ffc728950870c6aa591d0624727e33618704c0c9e34dfe7d2ea4 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:31:54 np0005592157 bash[73981]: dce0cf735954ffc728950870c6aa591d0624727e33618704c0c9e34dfe7d2ea4
Jan 22 08:31:54 np0005592157 systemd[1]: Started Ceph mon.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: pidfile_write: ignore empty --pid-file
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: load: jerasure load: lrc 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Git sha 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: DB SUMMARY
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: DB Session ID:  TF2HEUQI2CVPPVSIAPI2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                                     Options.env: 0x55ab48a1cc40
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                                Options.info_log: 0x55ab4b24eec0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                                 Options.wal_dir: 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                    Options.write_buffer_manager: 0x55ab4b25eb40
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                               Options.row_cache: None
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                              Options.wal_filter: None
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.wal_compression: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.max_background_jobs: 2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Compression algorithms supported:
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kZSTD supported: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kXpressCompression supported: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kBZip2Compression supported: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kLZ4Compression supported: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kZlibCompression supported: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: #011kSnappyCompression supported: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:           Options.merge_operator: 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:        Options.compaction_filter: None
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab4b24eaa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab4b2471f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:          Options.compression: NoCompression
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.num_levels: 7
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0bb3b8fe-17e9-4c6f-9303-f02c31530e6c
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088714085662, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088714088453, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "TF2HEUQI2CVPPVSIAPI2", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088714088596, "job": 1, "event": "recovery_finished"}
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ab4b270e00
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: DB pointer 0x55ab4b2fa000
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab4b2471f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@-1(???) e0 preinit fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 22 08:31:54 np0005592157 podman[74001]: 2026-01-22 13:31:54.120690041 +0000 UTC m=+0.048210732 container create a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce (image=quay.io/ceph/ceph:v18, name=cranky_villani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-22T13:31:52.256485Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).mds e1 new map
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: log_channel(cluster) log [DBG] : fsmap 
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mkfs 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 08:31:54 np0005592157 systemd[1]: Started libpod-conmon-a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce.scope.
Jan 22 08:31:54 np0005592157 podman[74001]: 2026-01-22 13:31:54.09920136 +0000 UTC m=+0.026722071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b8609bc287b6fde4427f5ccf3149598bb5fdaace7d5955c0732d6e3bf76c03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b8609bc287b6fde4427f5ccf3149598bb5fdaace7d5955c0732d6e3bf76c03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b8609bc287b6fde4427f5ccf3149598bb5fdaace7d5955c0732d6e3bf76c03/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 podman[74001]: 2026-01-22 13:31:54.231750514 +0000 UTC m=+0.159271215 container init a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce (image=quay.io/ceph/ceph:v18, name=cranky_villani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:54 np0005592157 podman[74001]: 2026-01-22 13:31:54.244782386 +0000 UTC m=+0.172303077 container start a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce (image=quay.io/ceph/ceph:v18, name=cranky_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 08:31:54 np0005592157 podman[74001]: 2026-01-22 13:31:54.248148119 +0000 UTC m=+0.175668850 container attach a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce (image=quay.io/ceph/ceph:v18, name=cranky_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 22 08:31:54 np0005592157 ceph-mon[74000]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744923036' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:  cluster:
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    id:     088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    health: HEALTH_OK
Jan 22 08:31:54 np0005592157 cranky_villani[74056]: 
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:  services:
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    mon: 1 daemons, quorum compute-0 (age 0.546872s)
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    mgr: no daemons active
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    osd: 0 osds: 0 up, 0 in
Jan 22 08:31:54 np0005592157 cranky_villani[74056]: 
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:  data:
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    pools:   0 pools, 0 pgs
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    objects: 0 objects, 0 B
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    usage:   0 B used, 0 B / 0 B avail
Jan 22 08:31:54 np0005592157 cranky_villani[74056]:    pgs:     
Jan 22 08:31:54 np0005592157 cranky_villani[74056]: 
Jan 22 08:31:54 np0005592157 systemd[1]: libpod-a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce.scope: Deactivated successfully.
Jan 22 08:31:54 np0005592157 podman[74001]: 2026-01-22 13:31:54.695475591 +0000 UTC m=+0.622996272 container died a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce (image=quay.io/ceph/ceph:v18, name=cranky_villani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:31:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e7b8609bc287b6fde4427f5ccf3149598bb5fdaace7d5955c0732d6e3bf76c03-merged.mount: Deactivated successfully.
Jan 22 08:31:54 np0005592157 podman[74001]: 2026-01-22 13:31:54.764409965 +0000 UTC m=+0.691930676 container remove a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce (image=quay.io/ceph/ceph:v18, name=cranky_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:54 np0005592157 systemd[1]: libpod-conmon-a15a4583e677258d8cd292200d0923845ba366385d2e8f8f259043d6a5822fce.scope: Deactivated successfully.
Jan 22 08:31:54 np0005592157 podman[74096]: 2026-01-22 13:31:54.86620749 +0000 UTC m=+0.067024817 container create f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b (image=quay.io/ceph/ceph:v18, name=vibrant_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:54 np0005592157 systemd[1]: Started libpod-conmon-f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b.scope.
Jan 22 08:31:54 np0005592157 podman[74096]: 2026-01-22 13:31:54.843494609 +0000 UTC m=+0.044311936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/922e09373a2ff38fd8338ff1b5d01becb0357a54c4489436c734a1982e7215ea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/922e09373a2ff38fd8338ff1b5d01becb0357a54c4489436c734a1982e7215ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/922e09373a2ff38fd8338ff1b5d01becb0357a54c4489436c734a1982e7215ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/922e09373a2ff38fd8338ff1b5d01becb0357a54c4489436c734a1982e7215ea/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:54 np0005592157 podman[74096]: 2026-01-22 13:31:54.98195688 +0000 UTC m=+0.182774257 container init f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b (image=quay.io/ceph/ceph:v18, name=vibrant_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:31:54 np0005592157 podman[74096]: 2026-01-22 13:31:54.991079185 +0000 UTC m=+0.191896512 container start f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b (image=quay.io/ceph/ceph:v18, name=vibrant_swanson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:31:54 np0005592157 podman[74096]: 2026-01-22 13:31:54.995813112 +0000 UTC m=+0.196630439 container attach f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b (image=quay.io/ceph/ceph:v18, name=vibrant_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:31:55 np0005592157 ceph-mon[74000]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 08:31:55 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 22 08:31:55 np0005592157 ceph-mon[74000]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1690237280' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:31:55 np0005592157 ceph-mon[74000]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1690237280' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 08:31:55 np0005592157 vibrant_swanson[74112]: 
Jan 22 08:31:55 np0005592157 vibrant_swanson[74112]: [global]
Jan 22 08:31:55 np0005592157 vibrant_swanson[74112]: 	fsid = 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:55 np0005592157 vibrant_swanson[74112]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 22 08:31:55 np0005592157 systemd[1]: libpod-f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b.scope: Deactivated successfully.
Jan 22 08:31:55 np0005592157 conmon[74112]: conmon f3a06db2e2227009bcfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b.scope/container/memory.events
Jan 22 08:31:55 np0005592157 podman[74096]: 2026-01-22 13:31:55.409268797 +0000 UTC m=+0.610086094 container died f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b (image=quay.io/ceph/ceph:v18, name=vibrant_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:31:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-922e09373a2ff38fd8338ff1b5d01becb0357a54c4489436c734a1982e7215ea-merged.mount: Deactivated successfully.
Jan 22 08:31:55 np0005592157 podman[74096]: 2026-01-22 13:31:55.451708936 +0000 UTC m=+0.652526223 container remove f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b (image=quay.io/ceph/ceph:v18, name=vibrant_swanson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:31:55 np0005592157 systemd[1]: libpod-conmon-f3a06db2e2227009bcfccd393ec212774b7cc9b76f1408cbbfce1726690e784b.scope: Deactivated successfully.
Jan 22 08:31:55 np0005592157 podman[74150]: 2026-01-22 13:31:55.527474018 +0000 UTC m=+0.053988205 container create be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f (image=quay.io/ceph/ceph:v18, name=keen_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:31:55 np0005592157 systemd[1]: Started libpod-conmon-be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f.scope.
Jan 22 08:31:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:55 np0005592157 podman[74150]: 2026-01-22 13:31:55.502523101 +0000 UTC m=+0.029037328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb343ae52374eaf9c75487e4520f1d54681261c4db7e358db5d8c5c9fdde4c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb343ae52374eaf9c75487e4520f1d54681261c4db7e358db5d8c5c9fdde4c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb343ae52374eaf9c75487e4520f1d54681261c4db7e358db5d8c5c9fdde4c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb343ae52374eaf9c75487e4520f1d54681261c4db7e358db5d8c5c9fdde4c6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:55 np0005592157 podman[74150]: 2026-01-22 13:31:55.626749441 +0000 UTC m=+0.153263708 container init be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f (image=quay.io/ceph/ceph:v18, name=keen_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:31:55 np0005592157 podman[74150]: 2026-01-22 13:31:55.639236119 +0000 UTC m=+0.165750306 container start be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f (image=quay.io/ceph/ceph:v18, name=keen_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 08:31:55 np0005592157 podman[74150]: 2026-01-22 13:31:55.643811422 +0000 UTC m=+0.170325689 container attach be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f (image=quay.io/ceph/ceph:v18, name=keen_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215233065' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:31:56 np0005592157 systemd[1]: libpod-be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f.scope: Deactivated successfully.
Jan 22 08:31:56 np0005592157 podman[74150]: 2026-01-22 13:31:56.050312796 +0000 UTC m=+0.576827013 container died be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f (image=quay.io/ceph/ceph:v18, name=keen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-eeb343ae52374eaf9c75487e4520f1d54681261c4db7e358db5d8c5c9fdde4c6-merged.mount: Deactivated successfully.
Jan 22 08:31:56 np0005592157 podman[74150]: 2026-01-22 13:31:56.096974979 +0000 UTC m=+0.623489146 container remove be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f (image=quay.io/ceph/ceph:v18, name=keen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:31:56 np0005592157 systemd[1]: libpod-conmon-be80e755c977a596a2739b6115bfedf71839f28092a3bf7019a38f85f38f4b7f.scope: Deactivated successfully.
Jan 22 08:31:56 np0005592157 systemd[1]: Stopping Ceph mon.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: from='client.? 192.168.122.100:0/1690237280' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: from='client.? 192.168.122.100:0/1690237280' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: mon.compute-0@0(leader) e1 shutdown
Jan 22 08:31:56 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[73996]: 2026-01-22T13:31:56.371+0000 7f58c278e640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 22 08:31:56 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[73996]: 2026-01-22T13:31:56.371+0000 7f58c278e640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 08:31:56 np0005592157 ceph-mon[74000]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 08:31:56 np0005592157 podman[74233]: 2026-01-22 13:31:56.422807919 +0000 UTC m=+0.090823825 container died dce0cf735954ffc728950870c6aa591d0624727e33618704c0c9e34dfe7d2ea4 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:31:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-298a5651643082907a0e56050a37e996f289d5375925239159ea69708c90d783-merged.mount: Deactivated successfully.
Jan 22 08:31:56 np0005592157 podman[74233]: 2026-01-22 13:31:56.46088373 +0000 UTC m=+0.128899646 container remove dce0cf735954ffc728950870c6aa591d0624727e33618704c0c9e34dfe7d2ea4 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:31:56 np0005592157 bash[74233]: ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0
Jan 22 08:31:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:31:56 np0005592157 systemd[1]: ceph-088fe176-0106-5401-803c-2da38b73b76a@mon.compute-0.service: Deactivated successfully.
Jan 22 08:31:56 np0005592157 systemd[1]: Stopped Ceph mon.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:31:56 np0005592157 systemd[1]: ceph-088fe176-0106-5401-803c-2da38b73b76a@mon.compute-0.service: Consumed 1.122s CPU time.
Jan 22 08:31:56 np0005592157 systemd[1]: Starting Ceph mon.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:31:56 np0005592157 podman[74338]: 2026-01-22 13:31:56.829139758 +0000 UTC m=+0.038046931 container create 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:31:56 np0005592157 podman[74338]: 2026-01-22 13:31:56.809944954 +0000 UTC m=+0.018852157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f623ec851142dbe552dcaebd07f410baf65f769953227778d1639d4f9776f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f623ec851142dbe552dcaebd07f410baf65f769953227778d1639d4f9776f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f623ec851142dbe552dcaebd07f410baf65f769953227778d1639d4f9776f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f623ec851142dbe552dcaebd07f410baf65f769953227778d1639d4f9776f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:56 np0005592157 podman[74338]: 2026-01-22 13:31:56.92554682 +0000 UTC m=+0.134454083 container init 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 22 08:31:56 np0005592157 podman[74338]: 2026-01-22 13:31:56.935379993 +0000 UTC m=+0.144287206 container start 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:31:56 np0005592157 bash[74338]: 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e
Jan 22 08:31:56 np0005592157 systemd[1]: Started Ceph mon.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:31:56 np0005592157 ceph-mon[74359]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:31:56 np0005592157 ceph-mon[74359]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 22 08:31:56 np0005592157 ceph-mon[74359]: pidfile_write: ignore empty --pid-file
Jan 22 08:31:56 np0005592157 ceph-mon[74359]: load: jerasure load: lrc 
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Git sha 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: DB SUMMARY
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: DB Session ID:  0YQIT4DMC1LDOZT4JVHT
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55210 ; 
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                                     Options.env: 0x5595ca046c40
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                                Options.info_log: 0x5595cc6c9040
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                                 Options.wal_dir: 
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                    Options.write_buffer_manager: 0x5595cc6d8b40
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                               Options.row_cache: None
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                              Options.wal_filter: None
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.wal_compression: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.max_background_jobs: 2
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Compression algorithms supported:
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kZSTD supported: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kXpressCompression supported: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kBZip2Compression supported: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kLZ4Compression supported: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kZlibCompression supported: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: #011kSnappyCompression supported: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:           Options.merge_operator: 
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:        Options.compaction_filter: None
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5595cc6c8c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5595cc6c11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:          Options.compression: NoCompression
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.num_levels: 7
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0bb3b8fe-17e9-4c6f-9303-f02c31530e6c
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088717003027, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088717007678, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53385, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51027, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088717, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088717007772, "job": 1, "event": "recovery_finished"}
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5595cc6eae00
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: DB pointer 0x5595cc774000
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???) e1 preinit fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???).mds e1 new map
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap 
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 22 08:31:57 np0005592157 podman[74360]: 2026-01-22 13:31:57.041732171 +0000 UTC m=+0.059380158 container create 7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a (image=quay.io/ceph/ceph:v18, name=sharp_grothendieck, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:31:57 np0005592157 systemd[1]: Started libpod-conmon-7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a.scope.
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 08:31:57 np0005592157 podman[74360]: 2026-01-22 13:31:57.023313046 +0000 UTC m=+0.040961053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdee937bdedd089b77c048248e35da699a4b86379af9f597b029f9cf6d8d902d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdee937bdedd089b77c048248e35da699a4b86379af9f597b029f9cf6d8d902d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdee937bdedd089b77c048248e35da699a4b86379af9f597b029f9cf6d8d902d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:57 np0005592157 podman[74360]: 2026-01-22 13:31:57.145751521 +0000 UTC m=+0.163399588 container init 7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a (image=quay.io/ceph/ceph:v18, name=sharp_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:31:57 np0005592157 podman[74360]: 2026-01-22 13:31:57.157394859 +0000 UTC m=+0.175042876 container start 7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a (image=quay.io/ceph/ceph:v18, name=sharp_grothendieck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 08:31:57 np0005592157 podman[74360]: 2026-01-22 13:31:57.161296365 +0000 UTC m=+0.178944382 container attach 7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a (image=quay.io/ceph/ceph:v18, name=sharp_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 08:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 22 08:31:57 np0005592157 systemd[1]: libpod-7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a.scope: Deactivated successfully.
Jan 22 08:31:57 np0005592157 podman[74360]: 2026-01-22 13:31:57.587586627 +0000 UTC m=+0.605234644 container died 7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a (image=quay.io/ceph/ceph:v18, name=sharp_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bdee937bdedd089b77c048248e35da699a4b86379af9f597b029f9cf6d8d902d-merged.mount: Deactivated successfully.
Jan 22 08:31:58 np0005592157 podman[74360]: 2026-01-22 13:31:58.256839782 +0000 UTC m=+1.274487789 container remove 7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a (image=quay.io/ceph/ceph:v18, name=sharp_grothendieck, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:31:58 np0005592157 podman[74453]: 2026-01-22 13:31:58.339799901 +0000 UTC m=+0.054542728 container create 960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7 (image=quay.io/ceph/ceph:v18, name=eloquent_chebyshev, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:31:58 np0005592157 systemd[1]: libpod-conmon-7eb045fb072751b2e09bfb2582eb2bfc2abd85f11f0c447370faaee4bbd88d1a.scope: Deactivated successfully.
Jan 22 08:31:58 np0005592157 systemd[1]: Started libpod-conmon-960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7.scope.
Jan 22 08:31:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:31:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d219453bcf9d0ad8837e3e255137b77e55747b3370b78384ffc5ac8891f7d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d219453bcf9d0ad8837e3e255137b77e55747b3370b78384ffc5ac8891f7d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d219453bcf9d0ad8837e3e255137b77e55747b3370b78384ffc5ac8891f7d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:58 np0005592157 podman[74453]: 2026-01-22 13:31:58.31787995 +0000 UTC m=+0.032622827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:58 np0005592157 podman[74453]: 2026-01-22 13:31:58.425915989 +0000 UTC m=+0.140658846 container init 960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7 (image=quay.io/ceph/ceph:v18, name=eloquent_chebyshev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:58 np0005592157 podman[74453]: 2026-01-22 13:31:58.43160757 +0000 UTC m=+0.146350397 container start 960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7 (image=quay.io/ceph/ceph:v18, name=eloquent_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:31:58 np0005592157 podman[74453]: 2026-01-22 13:31:58.436254734 +0000 UTC m=+0.150997581 container attach 960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7 (image=quay.io/ceph/ceph:v18, name=eloquent_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:31:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 22 08:31:58 np0005592157 systemd[1]: libpod-960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7.scope: Deactivated successfully.
Jan 22 08:31:58 np0005592157 podman[74495]: 2026-01-22 13:31:58.918834567 +0000 UTC m=+0.026411753 container died 960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7 (image=quay.io/ceph/ceph:v18, name=eloquent_chebyshev, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:31:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-12d219453bcf9d0ad8837e3e255137b77e55747b3370b78384ffc5ac8891f7d9-merged.mount: Deactivated successfully.
Jan 22 08:31:58 np0005592157 podman[74495]: 2026-01-22 13:31:58.973608471 +0000 UTC m=+0.081185627 container remove 960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7 (image=quay.io/ceph/ceph:v18, name=eloquent_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:31:58 np0005592157 systemd[1]: libpod-conmon-960b236459ebdd73c6ea8e4b310d39f54c12cb010d1101a1feeb7ecded80a3a7.scope: Deactivated successfully.
Jan 22 08:31:59 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:59 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:59 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:59 np0005592157 systemd[1]: Reloading.
Jan 22 08:31:59 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:59 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:59 np0005592157 systemd[1]: Starting Ceph mgr.compute-0.nyayzk for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:31:59 np0005592157 podman[74636]: 2026-01-22 13:31:59.911399221 +0000 UTC m=+0.071588540 container create db0fcc1ac1d4b189513e284012343a0e4131e1528c0f9e0d4488429f571852d6 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:31:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606fe6d4727ad11ca7927cbd1e702ab358897fe4ea167e0f208fb59cbbfc1fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606fe6d4727ad11ca7927cbd1e702ab358897fe4ea167e0f208fb59cbbfc1fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606fe6d4727ad11ca7927cbd1e702ab358897fe4ea167e0f208fb59cbbfc1fdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606fe6d4727ad11ca7927cbd1e702ab358897fe4ea167e0f208fb59cbbfc1fdb/merged/var/lib/ceph/mgr/ceph-compute-0.nyayzk supports timestamps until 2038 (0x7fffffff)
Jan 22 08:31:59 np0005592157 podman[74636]: 2026-01-22 13:31:59.880119418 +0000 UTC m=+0.040308797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:31:59 np0005592157 podman[74636]: 2026-01-22 13:31:59.98664831 +0000 UTC m=+0.146837669 container init db0fcc1ac1d4b189513e284012343a0e4131e1528c0f9e0d4488429f571852d6 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:31:59 np0005592157 podman[74636]: 2026-01-22 13:31:59.996732779 +0000 UTC m=+0.156922098 container start db0fcc1ac1d4b189513e284012343a0e4131e1528c0f9e0d4488429f571852d6 (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:00 np0005592157 bash[74636]: db0fcc1ac1d4b189513e284012343a0e4131e1528c0f9e0d4488429f571852d6
Jan 22 08:32:00 np0005592157 systemd[1]: Started Ceph mgr.compute-0.nyayzk for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: pidfile_write: ignore empty --pid-file
Jan 22 08:32:00 np0005592157 podman[74656]: 2026-01-22 13:32:00.108837319 +0000 UTC m=+0.062918925 container create 5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70 (image=quay.io/ceph/ceph:v18, name=heuristic_mclean, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:00 np0005592157 systemd[1]: Started libpod-conmon-5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70.scope.
Jan 22 08:32:00 np0005592157 podman[74656]: 2026-01-22 13:32:00.083460832 +0000 UTC m=+0.037542518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'alerts'
Jan 22 08:32:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42e4e3541f3187f4adc4060f700053fa23debd03d35218c27c3a1e1c1f98d80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42e4e3541f3187f4adc4060f700053fa23debd03d35218c27c3a1e1c1f98d80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42e4e3541f3187f4adc4060f700053fa23debd03d35218c27c3a1e1c1f98d80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:00 np0005592157 podman[74656]: 2026-01-22 13:32:00.224815735 +0000 UTC m=+0.178897391 container init 5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70 (image=quay.io/ceph/ceph:v18, name=heuristic_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:00 np0005592157 podman[74656]: 2026-01-22 13:32:00.234385991 +0000 UTC m=+0.188467597 container start 5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70 (image=quay.io/ceph/ceph:v18, name=heuristic_mclean, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:00 np0005592157 podman[74656]: 2026-01-22 13:32:00.238346829 +0000 UTC m=+0.192428485 container attach 5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70 (image=quay.io/ceph/ceph:v18, name=heuristic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'balancer'
Jan 22 08:32:00 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:00.490+0000 7fc7a224e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:32:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912347374' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]: 
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]: {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "health": {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "status": "HEALTH_OK",
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "checks": {},
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "mutes": []
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    },
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "election_epoch": 5,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "quorum": [
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        0
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    ],
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "quorum_names": [
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "compute-0"
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    ],
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "quorum_age": 3,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "monmap": {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "epoch": 1,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "min_mon_release_name": "reef",
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_mons": 1
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    },
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "osdmap": {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "epoch": 1,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_osds": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_up_osds": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "osd_up_since": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_in_osds": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "osd_in_since": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_remapped_pgs": 0
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    },
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "pgmap": {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "pgs_by_state": [],
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_pgs": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_pools": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_objects": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "data_bytes": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "bytes_used": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "bytes_avail": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "bytes_total": 0
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    },
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "fsmap": {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "epoch": 1,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "by_rank": [],
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "up:standby": 0
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    },
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "mgrmap": {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "available": false,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "num_standbys": 0,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "modules": [
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:            "iostat",
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:            "nfs",
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:            "restful"
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        ],
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "services": {}
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    },
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "servicemap": {
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "epoch": 1,
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:        "services": {}
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    },
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]:    "progress_events": {}
Jan 22 08:32:00 np0005592157 heuristic_mclean[74696]: }
Jan 22 08:32:00 np0005592157 systemd[1]: libpod-5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70.scope: Deactivated successfully.
Jan 22 08:32:00 np0005592157 podman[74722]: 2026-01-22 13:32:00.673006168 +0000 UTC m=+0.026657060 container died 5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70 (image=quay.io/ceph/ceph:v18, name=heuristic_mclean, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b42e4e3541f3187f4adc4060f700053fa23debd03d35218c27c3a1e1c1f98d80-merged.mount: Deactivated successfully.
Jan 22 08:32:00 np0005592157 podman[74722]: 2026-01-22 13:32:00.714608956 +0000 UTC m=+0.068259808 container remove 5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70 (image=quay.io/ceph/ceph:v18, name=heuristic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:00 np0005592157 systemd[1]: libpod-conmon-5c15bba4be7f18472b10363b37a4416723d0a68f541f2093e2f733ad78fc8a70.scope: Deactivated successfully.
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:32:00 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'cephadm'
Jan 22 08:32:00 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:00.795+0000 7fc7a224e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:32:02 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'crash'
Jan 22 08:32:02 np0005592157 podman[74748]: 2026-01-22 13:32:02.782887296 +0000 UTC m=+0.030343511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:02 np0005592157 podman[74748]: 2026-01-22 13:32:02.936970163 +0000 UTC m=+0.184426358 container create 08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513 (image=quay.io/ceph/ceph:v18, name=clever_lumiere, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:32:02 np0005592157 systemd[1]: Started libpod-conmon-08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513.scope.
Jan 22 08:32:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d38938f61b2090b39826c45bfa9c067aa90e2849d75fe62746816b68f0a2ac0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d38938f61b2090b39826c45bfa9c067aa90e2849d75fe62746816b68f0a2ac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d38938f61b2090b39826c45bfa9c067aa90e2849d75fe62746816b68f0a2ac0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:03 np0005592157 podman[74748]: 2026-01-22 13:32:03.028071184 +0000 UTC m=+0.275527439 container init 08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513 (image=quay.io/ceph/ceph:v18, name=clever_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:32:03 np0005592157 podman[74748]: 2026-01-22 13:32:03.035482747 +0000 UTC m=+0.282938932 container start 08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513 (image=quay.io/ceph/ceph:v18, name=clever_lumiere, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:32:03 np0005592157 podman[74748]: 2026-01-22 13:32:03.039402034 +0000 UTC m=+0.286858229 container attach 08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513 (image=quay.io/ceph/ceph:v18, name=clever_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:32:03 np0005592157 ceph-mgr[74655]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:32:03 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'dashboard'
Jan 22 08:32:03 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:03.087+0000 7fc7a224e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:32:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3698662995' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]: 
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]: {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "health": {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "status": "HEALTH_OK",
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "checks": {},
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "mutes": []
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    },
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "election_epoch": 5,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "quorum": [
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        0
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    ],
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "quorum_names": [
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "compute-0"
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    ],
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "quorum_age": 6,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "monmap": {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "epoch": 1,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "min_mon_release_name": "reef",
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_mons": 1
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    },
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "osdmap": {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "epoch": 1,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_osds": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_up_osds": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "osd_up_since": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_in_osds": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "osd_in_since": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_remapped_pgs": 0
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    },
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "pgmap": {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "pgs_by_state": [],
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_pgs": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_pools": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_objects": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "data_bytes": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "bytes_used": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "bytes_avail": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "bytes_total": 0
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    },
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "fsmap": {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "epoch": 1,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "by_rank": [],
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "up:standby": 0
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    },
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "mgrmap": {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "available": false,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "num_standbys": 0,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "modules": [
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:            "iostat",
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:            "nfs",
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:            "restful"
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        ],
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "services": {}
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    },
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "servicemap": {
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "epoch": 1,
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:        "services": {}
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    },
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]:    "progress_events": {}
Jan 22 08:32:03 np0005592157 clever_lumiere[74764]: }
Jan 22 08:32:03 np0005592157 systemd[1]: libpod-08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513.scope: Deactivated successfully.
Jan 22 08:32:03 np0005592157 podman[74748]: 2026-01-22 13:32:03.418806718 +0000 UTC m=+0.666262913 container died 08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513 (image=quay.io/ceph/ceph:v18, name=clever_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:32:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7d38938f61b2090b39826c45bfa9c067aa90e2849d75fe62746816b68f0a2ac0-merged.mount: Deactivated successfully.
Jan 22 08:32:03 np0005592157 podman[74748]: 2026-01-22 13:32:03.469435239 +0000 UTC m=+0.716891434 container remove 08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513 (image=quay.io/ceph/ceph:v18, name=clever_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:32:03 np0005592157 systemd[1]: libpod-conmon-08e38e9621cd4e01319a8ea2bfafc7dfaa0b401927bcdf07f3a87e76f1ba2513.scope: Deactivated successfully.
Jan 22 08:32:04 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'devicehealth'
Jan 22 08:32:04 np0005592157 ceph-mgr[74655]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:32:04 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 08:32:04 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:04.852+0000 7fc7a224e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:32:05 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 08:32:05 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 08:32:05 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  from numpy import show_config as show_numpy_config
Jan 22 08:32:05 np0005592157 ceph-mgr[74655]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:32:05 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'influx'
Jan 22 08:32:05 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:05.389+0000 7fc7a224e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:32:05 np0005592157 podman[74800]: 2026-01-22 13:32:05.550647849 +0000 UTC m=+0.050015647 container create 56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3 (image=quay.io/ceph/ceph:v18, name=confident_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 08:32:05 np0005592157 systemd[1]: Started libpod-conmon-56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3.scope.
Jan 22 08:32:05 np0005592157 podman[74800]: 2026-01-22 13:32:05.524431591 +0000 UTC m=+0.023799449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2c272dadbf12f287f48644c90ccb27c44e6f81a6dcd14e752992270d883a3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2c272dadbf12f287f48644c90ccb27c44e6f81a6dcd14e752992270d883a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2c272dadbf12f287f48644c90ccb27c44e6f81a6dcd14e752992270d883a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:05 np0005592157 podman[74800]: 2026-01-22 13:32:05.642133759 +0000 UTC m=+0.141501577 container init 56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3 (image=quay.io/ceph/ceph:v18, name=confident_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 22 08:32:05 np0005592157 ceph-mgr[74655]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:32:05 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'insights'
Jan 22 08:32:05 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:05.644+0000 7fc7a224e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:32:05 np0005592157 podman[74800]: 2026-01-22 13:32:05.649610184 +0000 UTC m=+0.148977982 container start 56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3 (image=quay.io/ceph/ceph:v18, name=confident_faraday, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:32:05 np0005592157 podman[74800]: 2026-01-22 13:32:05.653971571 +0000 UTC m=+0.153339399 container attach 56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3 (image=quay.io/ceph/ceph:v18, name=confident_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:05 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'iostat'
Jan 22 08:32:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37102442' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:06 np0005592157 confident_faraday[74816]: 
Jan 22 08:32:06 np0005592157 confident_faraday[74816]: {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "health": {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "status": "HEALTH_OK",
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "checks": {},
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "mutes": []
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    },
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "election_epoch": 5,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "quorum": [
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        0
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    ],
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "quorum_names": [
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "compute-0"
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    ],
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "quorum_age": 9,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "monmap": {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "epoch": 1,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "min_mon_release_name": "reef",
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_mons": 1
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    },
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "osdmap": {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "epoch": 1,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_osds": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_up_osds": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "osd_up_since": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_in_osds": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "osd_in_since": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_remapped_pgs": 0
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    },
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "pgmap": {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "pgs_by_state": [],
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_pgs": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_pools": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_objects": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "data_bytes": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "bytes_used": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "bytes_avail": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "bytes_total": 0
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    },
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "fsmap": {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "epoch": 1,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "by_rank": [],
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "up:standby": 0
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    },
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "mgrmap": {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "available": false,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "num_standbys": 0,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "modules": [
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:            "iostat",
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:            "nfs",
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:            "restful"
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        ],
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "services": {}
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    },
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "servicemap": {
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "epoch": 1,
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:        "services": {}
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    },
Jan 22 08:32:06 np0005592157 confident_faraday[74816]:    "progress_events": {}
Jan 22 08:32:06 np0005592157 confident_faraday[74816]: }
Jan 22 08:32:06 np0005592157 systemd[1]: libpod-56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3.scope: Deactivated successfully.
Jan 22 08:32:06 np0005592157 podman[74800]: 2026-01-22 13:32:06.067019927 +0000 UTC m=+0.566387785 container died 56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3 (image=quay.io/ceph/ceph:v18, name=confident_faraday, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 22 08:32:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-87a2c272dadbf12f287f48644c90ccb27c44e6f81a6dcd14e752992270d883a3-merged.mount: Deactivated successfully.
Jan 22 08:32:06 np0005592157 podman[74800]: 2026-01-22 13:32:06.135384176 +0000 UTC m=+0.634752004 container remove 56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3 (image=quay.io/ceph/ceph:v18, name=confident_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:32:06 np0005592157 ceph-mgr[74655]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:32:06 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'k8sevents'
Jan 22 08:32:06 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:06.137+0000 7fc7a224e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:32:06 np0005592157 systemd[1]: libpod-conmon-56ba69b4d4027af700afc471b323d92be0b9637f02c926c8f368d8770baf61a3.scope: Deactivated successfully.
Jan 22 08:32:07 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'localpool'
Jan 22 08:32:08 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 08:32:08 np0005592157 podman[74853]: 2026-01-22 13:32:08.243571113 +0000 UTC m=+0.071819406 container create ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c (image=quay.io/ceph/ceph:v18, name=jovial_keldysh, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:08 np0005592157 systemd[1]: Started libpod-conmon-ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c.scope.
Jan 22 08:32:08 np0005592157 podman[74853]: 2026-01-22 13:32:08.217226752 +0000 UTC m=+0.045475125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503737b420eb70009251fdb44918fc15e6457d45aeff2fae7ed71f8a97b53cff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503737b420eb70009251fdb44918fc15e6457d45aeff2fae7ed71f8a97b53cff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503737b420eb70009251fdb44918fc15e6457d45aeff2fae7ed71f8a97b53cff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:08 np0005592157 podman[74853]: 2026-01-22 13:32:08.331616678 +0000 UTC m=+0.159864981 container init ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c (image=quay.io/ceph/ceph:v18, name=jovial_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:32:08 np0005592157 podman[74853]: 2026-01-22 13:32:08.34341484 +0000 UTC m=+0.171663133 container start ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c (image=quay.io/ceph/ceph:v18, name=jovial_keldysh, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:32:08 np0005592157 podman[74853]: 2026-01-22 13:32:08.347004598 +0000 UTC m=+0.175252981 container attach ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c (image=quay.io/ceph/ceph:v18, name=jovial_keldysh, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 08:32:08 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'mirroring'
Jan 22 08:32:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2934223079' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]: 
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]: {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "health": {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "status": "HEALTH_OK",
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "checks": {},
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "mutes": []
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    },
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "election_epoch": 5,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "quorum": [
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        0
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    ],
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "quorum_names": [
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "compute-0"
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    ],
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "quorum_age": 11,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "monmap": {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "epoch": 1,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "min_mon_release_name": "reef",
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_mons": 1
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    },
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "osdmap": {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "epoch": 1,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_osds": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_up_osds": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "osd_up_since": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_in_osds": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "osd_in_since": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_remapped_pgs": 0
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    },
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "pgmap": {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "pgs_by_state": [],
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_pgs": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_pools": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_objects": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "data_bytes": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "bytes_used": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "bytes_avail": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "bytes_total": 0
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    },
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "fsmap": {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "epoch": 1,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "by_rank": [],
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "up:standby": 0
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    },
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "mgrmap": {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "available": false,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "num_standbys": 0,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "modules": [
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:            "iostat",
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:            "nfs",
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:            "restful"
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        ],
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "services": {}
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    },
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "servicemap": {
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "epoch": 1,
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:        "services": {}
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    },
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]:    "progress_events": {}
Jan 22 08:32:08 np0005592157 jovial_keldysh[74870]: }
Jan 22 08:32:08 np0005592157 systemd[1]: libpod-ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c.scope: Deactivated successfully.
Jan 22 08:32:08 np0005592157 podman[74853]: 2026-01-22 13:32:08.786190068 +0000 UTC m=+0.614438401 container died ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c (image=quay.io/ceph/ceph:v18, name=jovial_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-503737b420eb70009251fdb44918fc15e6457d45aeff2fae7ed71f8a97b53cff-merged.mount: Deactivated successfully.
Jan 22 08:32:08 np0005592157 podman[74853]: 2026-01-22 13:32:08.851728248 +0000 UTC m=+0.679976551 container remove ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c (image=quay.io/ceph/ceph:v18, name=jovial_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:08 np0005592157 systemd[1]: libpod-conmon-ee2ea8380c1320b211045d7952803a79f0af440bcfb14f18a51effc8a3549c2c.scope: Deactivated successfully.
Jan 22 08:32:09 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'nfs'
Jan 22 08:32:09 np0005592157 ceph-mgr[74655]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:32:09 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'orchestrator'
Jan 22 08:32:09 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:09.699+0000 7fc7a224e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:32:10 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:10.391+0000 7fc7a224e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:10 np0005592157 ceph-mgr[74655]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:10 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 08:32:10 np0005592157 ceph-mgr[74655]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:32:10 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'osd_support'
Jan 22 08:32:10 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:10.652+0000 7fc7a224e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:32:10 np0005592157 ceph-mgr[74655]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:32:10 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 08:32:10 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:10.894+0000 7fc7a224e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:32:10 np0005592157 podman[74909]: 2026-01-22 13:32:10.948512963 +0000 UTC m=+0.065739215 container create c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c (image=quay.io/ceph/ceph:v18, name=fervent_euler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 08:32:11 np0005592157 podman[74909]: 2026-01-22 13:32:10.913875498 +0000 UTC m=+0.031101840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:11 np0005592157 systemd[1]: Started libpod-conmon-c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c.scope.
Jan 22 08:32:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dcafed9326e198cb2ec46430db29047423ae7ff90fe92c0e9108110fe36f04b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dcafed9326e198cb2ec46430db29047423ae7ff90fe92c0e9108110fe36f04b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dcafed9326e198cb2ec46430db29047423ae7ff90fe92c0e9108110fe36f04b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:11 np0005592157 podman[74909]: 2026-01-22 13:32:11.078584107 +0000 UTC m=+0.195810369 container init c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c (image=quay.io/ceph/ceph:v18, name=fervent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:11 np0005592157 podman[74909]: 2026-01-22 13:32:11.086271217 +0000 UTC m=+0.203497449 container start c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c (image=quay.io/ceph/ceph:v18, name=fervent_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 22 08:32:11 np0005592157 podman[74909]: 2026-01-22 13:32:11.090397319 +0000 UTC m=+0.207623571 container attach c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c (image=quay.io/ceph/ceph:v18, name=fervent_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:32:11 np0005592157 ceph-mgr[74655]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:32:11 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'progress'
Jan 22 08:32:11 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:11.181+0000 7fc7a224e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:32:11 np0005592157 ceph-mgr[74655]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:32:11 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'prometheus'
Jan 22 08:32:11 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:11.413+0000 7fc7a224e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:32:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364107878' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:11 np0005592157 fervent_euler[74925]: 
Jan 22 08:32:11 np0005592157 fervent_euler[74925]: {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "health": {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "status": "HEALTH_OK",
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "checks": {},
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "mutes": []
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    },
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "election_epoch": 5,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "quorum": [
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        0
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    ],
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "quorum_names": [
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "compute-0"
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    ],
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "quorum_age": 14,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "monmap": {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "epoch": 1,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "min_mon_release_name": "reef",
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_mons": 1
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    },
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "osdmap": {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "epoch": 1,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_osds": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_up_osds": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "osd_up_since": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_in_osds": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "osd_in_since": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_remapped_pgs": 0
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    },
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "pgmap": {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "pgs_by_state": [],
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_pgs": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_pools": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_objects": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "data_bytes": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "bytes_used": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "bytes_avail": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "bytes_total": 0
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    },
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "fsmap": {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "epoch": 1,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "by_rank": [],
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "up:standby": 0
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    },
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "mgrmap": {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "available": false,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "num_standbys": 0,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "modules": [
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:            "iostat",
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:            "nfs",
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:            "restful"
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        ],
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "services": {}
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    },
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "servicemap": {
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "epoch": 1,
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:        "services": {}
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    },
Jan 22 08:32:11 np0005592157 fervent_euler[74925]:    "progress_events": {}
Jan 22 08:32:11 np0005592157 fervent_euler[74925]: }
Jan 22 08:32:11 np0005592157 systemd[1]: libpod-c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c.scope: Deactivated successfully.
Jan 22 08:32:11 np0005592157 podman[74909]: 2026-01-22 13:32:11.498117662 +0000 UTC m=+0.615343894 container died c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c (image=quay.io/ceph/ceph:v18, name=fervent_euler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:32:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4dcafed9326e198cb2ec46430db29047423ae7ff90fe92c0e9108110fe36f04b-merged.mount: Deactivated successfully.
Jan 22 08:32:11 np0005592157 podman[74909]: 2026-01-22 13:32:11.54537311 +0000 UTC m=+0.662599372 container remove c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c (image=quay.io/ceph/ceph:v18, name=fervent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:32:11 np0005592157 systemd[1]: libpod-conmon-c693be35ec5f9d336d28489fec231aa3baf8a696562a1207671a761990981d8c.scope: Deactivated successfully.
Jan 22 08:32:12 np0005592157 ceph-mgr[74655]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:32:12 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'rbd_support'
Jan 22 08:32:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:12.423+0000 7fc7a224e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:32:12 np0005592157 ceph-mgr[74655]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:32:12 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'restful'
Jan 22 08:32:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:12.751+0000 7fc7a224e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:32:13 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'rgw'
Jan 22 08:32:13 np0005592157 podman[74963]: 2026-01-22 13:32:13.655769431 +0000 UTC m=+0.075635280 container create 08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac (image=quay.io/ceph/ceph:v18, name=busy_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:32:13 np0005592157 systemd[1]: Started libpod-conmon-08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac.scope.
Jan 22 08:32:13 np0005592157 podman[74963]: 2026-01-22 13:32:13.621547595 +0000 UTC m=+0.041413484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7d2592e049fea5bb4b139a7ccbab5aa43ed84b565b9fe1a207dc0fa797e6d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7d2592e049fea5bb4b139a7ccbab5aa43ed84b565b9fe1a207dc0fa797e6d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7d2592e049fea5bb4b139a7ccbab5aa43ed84b565b9fe1a207dc0fa797e6d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:13 np0005592157 podman[74963]: 2026-01-22 13:32:13.757225447 +0000 UTC m=+0.177091346 container init 08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac (image=quay.io/ceph/ceph:v18, name=busy_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 08:32:13 np0005592157 podman[74963]: 2026-01-22 13:32:13.766474896 +0000 UTC m=+0.186340735 container start 08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac (image=quay.io/ceph/ceph:v18, name=busy_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:32:13 np0005592157 podman[74963]: 2026-01-22 13:32:13.771121201 +0000 UTC m=+0.190987100 container attach 08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac (image=quay.io/ceph/ceph:v18, name=busy_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694292001' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:14 np0005592157 busy_tharp[74980]: 
Jan 22 08:32:14 np0005592157 busy_tharp[74980]: {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "health": {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "status": "HEALTH_OK",
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "checks": {},
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "mutes": []
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    },
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "election_epoch": 5,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "quorum": [
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        0
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    ],
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "quorum_names": [
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "compute-0"
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    ],
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "quorum_age": 17,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "monmap": {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "epoch": 1,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "min_mon_release_name": "reef",
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_mons": 1
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    },
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "osdmap": {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "epoch": 1,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_osds": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_up_osds": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "osd_up_since": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_in_osds": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "osd_in_since": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_remapped_pgs": 0
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    },
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "pgmap": {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "pgs_by_state": [],
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_pgs": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_pools": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_objects": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "data_bytes": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "bytes_used": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "bytes_avail": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "bytes_total": 0
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    },
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "fsmap": {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "epoch": 1,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "by_rank": [],
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "up:standby": 0
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    },
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "mgrmap": {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "available": false,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "num_standbys": 0,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "modules": [
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:            "iostat",
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:            "nfs",
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:            "restful"
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        ],
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "services": {}
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    },
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "servicemap": {
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "epoch": 1,
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:        "services": {}
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    },
Jan 22 08:32:14 np0005592157 busy_tharp[74980]:    "progress_events": {}
Jan 22 08:32:14 np0005592157 busy_tharp[74980]: }
Jan 22 08:32:14 np0005592157 ceph-mgr[74655]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:32:14 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:14.183+0000 7fc7a224e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:32:14 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'rook'
Jan 22 08:32:14 np0005592157 systemd[1]: libpod-08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac.scope: Deactivated successfully.
Jan 22 08:32:14 np0005592157 podman[75006]: 2026-01-22 13:32:14.267445343 +0000 UTC m=+0.042880230 container died 08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac (image=quay.io/ceph/ceph:v18, name=busy_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 08:32:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fd7d2592e049fea5bb4b139a7ccbab5aa43ed84b565b9fe1a207dc0fa797e6d9-merged.mount: Deactivated successfully.
Jan 22 08:32:14 np0005592157 podman[75006]: 2026-01-22 13:32:14.324641167 +0000 UTC m=+0.100076004 container remove 08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac (image=quay.io/ceph/ceph:v18, name=busy_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 08:32:14 np0005592157 systemd[1]: libpod-conmon-08586f9161e347a299b7f4c4a5d975aefe61e5cd39bc6a5c2ca41eb1db1e7eac.scope: Deactivated successfully.
Jan 22 08:32:16 np0005592157 ceph-mgr[74655]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:32:16 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'selftest'
Jan 22 08:32:16 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:16.259+0000 7fc7a224e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:32:16 np0005592157 podman[75020]: 2026-01-22 13:32:16.430812413 +0000 UTC m=+0.064719270 container create b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 08:32:16 np0005592157 systemd[1]: Started libpod-conmon-b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677.scope.
Jan 22 08:32:16 np0005592157 podman[75020]: 2026-01-22 13:32:16.407128858 +0000 UTC m=+0.041035725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:16 np0005592157 ceph-mgr[74655]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:32:16 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'snap_schedule'
Jan 22 08:32:16 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:16.508+0000 7fc7a224e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:32:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e933bc68fba1265b83c509943448ac80bd59c8dd3116c74269521a014ee7dfad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e933bc68fba1265b83c509943448ac80bd59c8dd3116c74269521a014ee7dfad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e933bc68fba1265b83c509943448ac80bd59c8dd3116c74269521a014ee7dfad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:16 np0005592157 podman[75020]: 2026-01-22 13:32:16.547489786 +0000 UTC m=+0.181396653 container init b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:16 np0005592157 podman[75020]: 2026-01-22 13:32:16.558017776 +0000 UTC m=+0.191924623 container start b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:32:16 np0005592157 podman[75020]: 2026-01-22 13:32:16.562596129 +0000 UTC m=+0.196502986 container attach b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:32:16 np0005592157 ceph-mgr[74655]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:32:16 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'stats'
Jan 22 08:32:16 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:16.757+0000 7fc7a224e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:32:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129408999' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]: 
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]: {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "health": {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "status": "HEALTH_OK",
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "checks": {},
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "mutes": []
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    },
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "election_epoch": 5,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "quorum": [
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        0
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    ],
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "quorum_names": [
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "compute-0"
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    ],
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "quorum_age": 19,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "monmap": {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "epoch": 1,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "min_mon_release_name": "reef",
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_mons": 1
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    },
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "osdmap": {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "epoch": 1,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_osds": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_up_osds": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "osd_up_since": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_in_osds": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "osd_in_since": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_remapped_pgs": 0
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    },
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "pgmap": {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "pgs_by_state": [],
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_pgs": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_pools": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_objects": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "data_bytes": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "bytes_used": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "bytes_avail": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "bytes_total": 0
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    },
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "fsmap": {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "epoch": 1,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "by_rank": [],
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "up:standby": 0
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    },
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "mgrmap": {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "available": false,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "num_standbys": 0,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "modules": [
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:            "iostat",
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:            "nfs",
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:            "restful"
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        ],
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "services": {}
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    },
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "servicemap": {
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "epoch": 1,
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:        "services": {}
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    },
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]:    "progress_events": {}
Jan 22 08:32:17 np0005592157 exciting_snyder[75036]: }
Jan 22 08:32:17 np0005592157 systemd[1]: libpod-b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677.scope: Deactivated successfully.
Jan 22 08:32:17 np0005592157 podman[75020]: 2026-01-22 13:32:17.135032252 +0000 UTC m=+0.768939069 container died b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e933bc68fba1265b83c509943448ac80bd59c8dd3116c74269521a014ee7dfad-merged.mount: Deactivated successfully.
Jan 22 08:32:17 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'status'
Jan 22 08:32:17 np0005592157 podman[75020]: 2026-01-22 13:32:17.190720778 +0000 UTC m=+0.824627605 container remove b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:17 np0005592157 systemd[1]: libpod-conmon-b7e8933592c9b6f19a5104092295fd97c468baf70def25f1982f36688e25e677.scope: Deactivated successfully.
Jan 22 08:32:17 np0005592157 ceph-mgr[74655]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:32:17 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'telegraf'
Jan 22 08:32:17 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:17.461+0000 7fc7a224e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:32:17 np0005592157 ceph-mgr[74655]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:32:17 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'telemetry'
Jan 22 08:32:17 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:17.701+0000 7fc7a224e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:32:18 np0005592157 ceph-mgr[74655]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:32:18 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 08:32:18 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:18.310+0000 7fc7a224e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:32:18 np0005592157 ceph-mgr[74655]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:18 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'volumes'
Jan 22 08:32:18 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:18.976+0000 7fc7a224e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:19 np0005592157 podman[75072]: 2026-01-22 13:32:19.29627801 +0000 UTC m=+0.080841258 container create 0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244 (image=quay.io/ceph/ceph:v18, name=nervous_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:19 np0005592157 podman[75072]: 2026-01-22 13:32:19.246749596 +0000 UTC m=+0.031312864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:19 np0005592157 systemd[1]: Started libpod-conmon-0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244.scope.
Jan 22 08:32:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba54869e63d03856e808d7ac7395c011b9cc7c4d3c69fcd3409322ba7a1a437/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba54869e63d03856e808d7ac7395c011b9cc7c4d3c69fcd3409322ba7a1a437/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba54869e63d03856e808d7ac7395c011b9cc7c4d3c69fcd3409322ba7a1a437/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:19 np0005592157 podman[75072]: 2026-01-22 13:32:19.41124333 +0000 UTC m=+0.195806648 container init 0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244 (image=quay.io/ceph/ceph:v18, name=nervous_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:32:19 np0005592157 podman[75072]: 2026-01-22 13:32:19.421262618 +0000 UTC m=+0.205825876 container start 0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244 (image=quay.io/ceph/ceph:v18, name=nervous_jang, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:19 np0005592157 podman[75072]: 2026-01-22 13:32:19.426554988 +0000 UTC m=+0.211118236 container attach 0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244 (image=quay.io/ceph/ceph:v18, name=nervous_jang, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:19 np0005592157 ceph-mgr[74655]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:32:19 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'zabbix'
Jan 22 08:32:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:19.851+0000 7fc7a224e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:32:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459628142' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:19 np0005592157 nervous_jang[75088]: 
Jan 22 08:32:19 np0005592157 nervous_jang[75088]: {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "health": {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "status": "HEALTH_OK",
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "checks": {},
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "mutes": []
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    },
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "election_epoch": 5,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "quorum": [
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        0
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    ],
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "quorum_names": [
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "compute-0"
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    ],
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "quorum_age": 22,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "monmap": {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "epoch": 1,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "min_mon_release_name": "reef",
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_mons": 1
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    },
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "osdmap": {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "epoch": 1,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_osds": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_up_osds": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "osd_up_since": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_in_osds": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "osd_in_since": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_remapped_pgs": 0
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    },
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "pgmap": {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "pgs_by_state": [],
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_pgs": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_pools": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_objects": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "data_bytes": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "bytes_used": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "bytes_avail": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "bytes_total": 0
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    },
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "fsmap": {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "epoch": 1,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "by_rank": [],
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "up:standby": 0
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    },
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "mgrmap": {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "available": false,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "num_standbys": 0,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "modules": [
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:            "iostat",
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:            "nfs",
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:            "restful"
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        ],
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "services": {}
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    },
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "servicemap": {
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "epoch": 1,
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:        "services": {}
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    },
Jan 22 08:32:19 np0005592157 nervous_jang[75088]:    "progress_events": {}
Jan 22 08:32:19 np0005592157 nervous_jang[75088]: }
Jan 22 08:32:19 np0005592157 systemd[1]: libpod-0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244.scope: Deactivated successfully.
Jan 22 08:32:19 np0005592157 podman[75072]: 2026-01-22 13:32:19.885263542 +0000 UTC m=+0.669826770 container died 0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244 (image=quay.io/ceph/ceph:v18, name=nervous_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2ba54869e63d03856e808d7ac7395c011b9cc7c4d3c69fcd3409322ba7a1a437-merged.mount: Deactivated successfully.
Jan 22 08:32:19 np0005592157 podman[75072]: 2026-01-22 13:32:19.941193034 +0000 UTC m=+0.725756262 container remove 0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244 (image=quay.io/ceph/ceph:v18, name=nervous_jang, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:32:19 np0005592157 systemd[1]: libpod-conmon-0f3c5fadd00c4d65da3f6286e7ac18f9fca441ec9bd8efc2e42de90e7699d244.scope: Deactivated successfully.
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:32:20 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:20.114+0000 7fc7a224e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: ms_deliver_dispatch: unhandled message 0x55fddfe02f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nyayzk
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr handle_mgr_map Activating!
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr handle_mgr_map I am now activating
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.nyayzk(active, starting, since 0.013539s)
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e1 all = 1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.nyayzk", "id": "compute-0.nyayzk"} v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nyayzk", "id": "compute-0.nyayzk"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: balancer
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Manager daemon compute-0.nyayzk is now available
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: crash
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [balancer INFO root] Starting
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: devicehealth
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:32:20
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [balancer INFO root] No pools available
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Starting
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: iostat
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: nfs
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: orchestrator
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: pg_autoscaler
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: progress
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [progress INFO root] Loading...
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [progress INFO root] No stored events to load
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [progress INFO root] Loaded [] historic events
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [progress INFO root] Loaded OSDMap, ready.
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] recovery thread starting
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] starting setup
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: rbd_support
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: restful
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/mirror_snapshot_schedule"} v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/mirror_snapshot_schedule"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [restful INFO root] server_addr: :: server_port: 8003
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: status
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: telemetry
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [restful WARNING root] server not running: no certificate configured
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] PerfHandler: starting
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TaskHandler: starting
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/trash_purge_schedule"} v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/trash_purge_schedule"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] setup complete
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: volumes
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: Activating manager daemon compute-0.nyayzk
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: Manager daemon compute-0.nyayzk is now available
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/mirror_snapshot_schedule"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/trash_purge_schedule"}]: dispatch
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:20 np0005592157 ceph-mon[74359]: from='mgr.14102 192.168.122.100:0/1527446061' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.nyayzk(active, since 1.02599s)
Jan 22 08:32:22 np0005592157 podman[75207]: 2026-01-22 13:32:22.0125118 +0000 UTC m=+0.037345734 container create 55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73 (image=quay.io/ceph/ceph:v18, name=crazy_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 08:32:22 np0005592157 systemd[1]: Started libpod-conmon-55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73.scope.
Jan 22 08:32:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd90674d735750d167e1ef48f5f712d557cd93819cd20843886307a78defd87/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd90674d735750d167e1ef48f5f712d557cd93819cd20843886307a78defd87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd90674d735750d167e1ef48f5f712d557cd93819cd20843886307a78defd87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:22 np0005592157 podman[75207]: 2026-01-22 13:32:21.996556046 +0000 UTC m=+0.021389970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:22 np0005592157 podman[75207]: 2026-01-22 13:32:22.102092863 +0000 UTC m=+0.126926787 container init 55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73 (image=quay.io/ceph/ceph:v18, name=crazy_chebyshev, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:32:22 np0005592157 podman[75207]: 2026-01-22 13:32:22.11368546 +0000 UTC m=+0.138519384 container start 55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73 (image=quay.io/ceph/ceph:v18, name=crazy_chebyshev, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:32:22 np0005592157 podman[75207]: 2026-01-22 13:32:22.117113054 +0000 UTC m=+0.141947018 container attach 55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73 (image=quay.io/ceph/ceph:v18, name=crazy_chebyshev, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 08:32:22 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.nyayzk(active, since 2s)
Jan 22 08:32:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 22 08:32:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3732788645' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]: 
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]: {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "health": {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "status": "HEALTH_OK",
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "checks": {},
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "mutes": []
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    },
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "election_epoch": 5,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "quorum": [
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        0
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    ],
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "quorum_names": [
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "compute-0"
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    ],
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "quorum_age": 25,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "monmap": {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "epoch": 1,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "min_mon_release_name": "reef",
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_mons": 1
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    },
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "osdmap": {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "epoch": 1,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_osds": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_up_osds": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "osd_up_since": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_in_osds": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "osd_in_since": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_remapped_pgs": 0
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    },
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "pgmap": {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "pgs_by_state": [],
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_pgs": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_pools": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_objects": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "data_bytes": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "bytes_used": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "bytes_avail": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "bytes_total": 0
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    },
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "fsmap": {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "epoch": 1,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "by_rank": [],
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "up:standby": 0
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    },
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "mgrmap": {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "available": true,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "num_standbys": 0,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "modules": [
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:            "iostat",
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:            "nfs",
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:            "restful"
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        ],
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "services": {}
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    },
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "servicemap": {
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "epoch": 1,
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "modified": "2026-01-22T13:31:54.130920+0000",
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:        "services": {}
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    },
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]:    "progress_events": {}
Jan 22 08:32:22 np0005592157 crazy_chebyshev[75223]: }
Jan 22 08:32:22 np0005592157 systemd[1]: libpod-55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73.scope: Deactivated successfully.
Jan 22 08:32:22 np0005592157 podman[75207]: 2026-01-22 13:32:22.797806672 +0000 UTC m=+0.822640636 container died 55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73 (image=quay.io/ceph/ceph:v18, name=crazy_chebyshev, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:32:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1fd90674d735750d167e1ef48f5f712d557cd93819cd20843886307a78defd87-merged.mount: Deactivated successfully.
Jan 22 08:32:22 np0005592157 podman[75207]: 2026-01-22 13:32:22.85280571 +0000 UTC m=+0.877639654 container remove 55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73 (image=quay.io/ceph/ceph:v18, name=crazy_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:32:22 np0005592157 systemd[1]: libpod-conmon-55189a49312a225bd805c98b31c2d4d67426afb3879fde8472e2fdca47787a73.scope: Deactivated successfully.
Jan 22 08:32:22 np0005592157 podman[75261]: 2026-01-22 13:32:22.936606141 +0000 UTC m=+0.055911753 container create 99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a (image=quay.io/ceph/ceph:v18, name=eager_austin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:22 np0005592157 systemd[1]: Started libpod-conmon-99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a.scope.
Jan 22 08:32:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3430aaeb1e2e680f470838c3a7f9a1e74454ea53637bae56f30f2e1fa3a00de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3430aaeb1e2e680f470838c3a7f9a1e74454ea53637bae56f30f2e1fa3a00de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3430aaeb1e2e680f470838c3a7f9a1e74454ea53637bae56f30f2e1fa3a00de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3430aaeb1e2e680f470838c3a7f9a1e74454ea53637bae56f30f2e1fa3a00de/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:22 np0005592157 podman[75261]: 2026-01-22 13:32:22.997104305 +0000 UTC m=+0.116409977 container init 99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a (image=quay.io/ceph/ceph:v18, name=eager_austin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:23 np0005592157 podman[75261]: 2026-01-22 13:32:22.909531332 +0000 UTC m=+0.028837034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:23 np0005592157 podman[75261]: 2026-01-22 13:32:23.008286932 +0000 UTC m=+0.127592584 container start 99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a (image=quay.io/ceph/ceph:v18, name=eager_austin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:23 np0005592157 podman[75261]: 2026-01-22 13:32:23.012376053 +0000 UTC m=+0.131681705 container attach 99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a (image=quay.io/ceph/ceph:v18, name=eager_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 22 08:32:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1627422301' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:32:23 np0005592157 systemd[1]: libpod-99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a.scope: Deactivated successfully.
Jan 22 08:32:23 np0005592157 podman[75261]: 2026-01-22 13:32:23.549327899 +0000 UTC m=+0.668633551 container died 99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a (image=quay.io/ceph/ceph:v18, name=eager_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:32:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e3430aaeb1e2e680f470838c3a7f9a1e74454ea53637bae56f30f2e1fa3a00de-merged.mount: Deactivated successfully.
Jan 22 08:32:23 np0005592157 podman[75261]: 2026-01-22 13:32:23.717684519 +0000 UTC m=+0.836990131 container remove 99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a (image=quay.io/ceph/ceph:v18, name=eager_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:32:23 np0005592157 systemd[1]: libpod-conmon-99cec55f31c7288b6e915a81345d60348629438446e7a2a30e002da99718c67a.scope: Deactivated successfully.
Jan 22 08:32:23 np0005592157 podman[75317]: 2026-01-22 13:32:23.778652975 +0000 UTC m=+0.041748402 container create 64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f (image=quay.io/ceph/ceph:v18, name=infallible_newton, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:32:23 np0005592157 systemd[1]: Started libpod-conmon-64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f.scope.
Jan 22 08:32:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cedc513ba6ce23815259f21a0e273d6cce43d21272c1a30778bf35c1b01356/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cedc513ba6ce23815259f21a0e273d6cce43d21272c1a30778bf35c1b01356/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cedc513ba6ce23815259f21a0e273d6cce43d21272c1a30778bf35c1b01356/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:23 np0005592157 podman[75317]: 2026-01-22 13:32:23.761987233 +0000 UTC m=+0.025082680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:23 np0005592157 podman[75317]: 2026-01-22 13:32:23.867756797 +0000 UTC m=+0.130852294 container init 64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f (image=quay.io/ceph/ceph:v18, name=infallible_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 22 08:32:23 np0005592157 podman[75317]: 2026-01-22 13:32:23.873592281 +0000 UTC m=+0.136687748 container start 64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f (image=quay.io/ceph/ceph:v18, name=infallible_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:23 np0005592157 podman[75317]: 2026-01-22 13:32:23.878300627 +0000 UTC m=+0.141396084 container attach 64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f (image=quay.io/ceph/ceph:v18, name=infallible_newton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:24 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:24 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1627422301' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:32:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 22 08:32:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3185909986' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 22 08:32:25 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3185909986' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 22 08:32:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3185909986' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 22 08:32:25 np0005592157 ceph-mgr[74655]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 22 08:32:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.nyayzk(active, since 5s)
Jan 22 08:32:25 np0005592157 systemd[1]: libpod-64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f.scope: Deactivated successfully.
Jan 22 08:32:25 np0005592157 podman[75317]: 2026-01-22 13:32:25.217181157 +0000 UTC m=+1.480276624 container died 64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f (image=quay.io/ceph/ceph:v18, name=infallible_newton, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:32:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-71cedc513ba6ce23815259f21a0e273d6cce43d21272c1a30778bf35c1b01356-merged.mount: Deactivated successfully.
Jan 22 08:32:25 np0005592157 podman[75317]: 2026-01-22 13:32:25.274060882 +0000 UTC m=+1.537156319 container remove 64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f (image=quay.io/ceph/ceph:v18, name=infallible_newton, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:32:25 np0005592157 systemd[1]: libpod-conmon-64e5cfcbd13b8c8057d2b9351233e540230c9ffcf711b1bd021d3cf0daa5683f.scope: Deactivated successfully.
Jan 22 08:32:25 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: ignoring --setuser ceph since I am not root
Jan 22 08:32:25 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: ignoring --setgroup ceph since I am not root
Jan 22 08:32:25 np0005592157 ceph-mgr[74655]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 22 08:32:25 np0005592157 ceph-mgr[74655]: pidfile_write: ignore empty --pid-file
Jan 22 08:32:25 np0005592157 podman[75373]: 2026-01-22 13:32:25.347727462 +0000 UTC m=+0.046925980 container create 33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365 (image=quay.io/ceph/ceph:v18, name=relaxed_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 08:32:25 np0005592157 systemd[1]: Started libpod-conmon-33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365.scope.
Jan 22 08:32:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:25 np0005592157 podman[75373]: 2026-01-22 13:32:25.328375014 +0000 UTC m=+0.027573542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181bbc9b2c64a63ba33d684cd87295737d3bc343343e321091bffa5b15e38764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181bbc9b2c64a63ba33d684cd87295737d3bc343343e321091bffa5b15e38764/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181bbc9b2c64a63ba33d684cd87295737d3bc343343e321091bffa5b15e38764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:25 np0005592157 podman[75373]: 2026-01-22 13:32:25.445497748 +0000 UTC m=+0.144696266 container init 33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365 (image=quay.io/ceph/ceph:v18, name=relaxed_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:32:25 np0005592157 podman[75373]: 2026-01-22 13:32:25.450266726 +0000 UTC m=+0.149465234 container start 33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365 (image=quay.io/ceph/ceph:v18, name=relaxed_bartik, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:25 np0005592157 podman[75373]: 2026-01-22 13:32:25.453505606 +0000 UTC m=+0.152704114 container attach 33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365 (image=quay.io/ceph/ceph:v18, name=relaxed_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:32:25 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'alerts'
Jan 22 08:32:25 np0005592157 ceph-mgr[74655]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:32:25 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'balancer'
Jan 22 08:32:25 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:25.778+0000 7fdfa9ad2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:32:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 22 08:32:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942593034' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 08:32:26 np0005592157 relaxed_bartik[75412]: {
Jan 22 08:32:26 np0005592157 relaxed_bartik[75412]:    "epoch": 5,
Jan 22 08:32:26 np0005592157 relaxed_bartik[75412]:    "available": true,
Jan 22 08:32:26 np0005592157 relaxed_bartik[75412]:    "active_name": "compute-0.nyayzk",
Jan 22 08:32:26 np0005592157 relaxed_bartik[75412]:    "num_standby": 0
Jan 22 08:32:26 np0005592157 relaxed_bartik[75412]: }
Jan 22 08:32:26 np0005592157 ceph-mgr[74655]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:32:26 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'cephadm'
Jan 22 08:32:26 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:26.031+0000 7fdfa9ad2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:32:26 np0005592157 systemd[1]: libpod-33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365.scope: Deactivated successfully.
Jan 22 08:32:26 np0005592157 podman[75373]: 2026-01-22 13:32:26.040368036 +0000 UTC m=+0.739566574 container died 33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365 (image=quay.io/ceph/ceph:v18, name=relaxed_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-181bbc9b2c64a63ba33d684cd87295737d3bc343343e321091bffa5b15e38764-merged.mount: Deactivated successfully.
Jan 22 08:32:26 np0005592157 podman[75373]: 2026-01-22 13:32:26.092436762 +0000 UTC m=+0.791635290 container remove 33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365 (image=quay.io/ceph/ceph:v18, name=relaxed_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:26 np0005592157 systemd[1]: libpod-conmon-33c3f2864b44264a0bd99835836c346fcb0be0c073642f00162c6628dbc79365.scope: Deactivated successfully.
Jan 22 08:32:26 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3185909986' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 22 08:32:26 np0005592157 podman[75451]: 2026-01-22 13:32:26.197123709 +0000 UTC m=+0.072687347 container create c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163 (image=quay.io/ceph/ceph:v18, name=musing_matsumoto, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:32:26 np0005592157 systemd[1]: Started libpod-conmon-c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163.scope.
Jan 22 08:32:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed8688ab8c8ca8f12c7bc100235a9879447d33bf7412506a5925670d3aff2ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed8688ab8c8ca8f12c7bc100235a9879447d33bf7412506a5925670d3aff2ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed8688ab8c8ca8f12c7bc100235a9879447d33bf7412506a5925670d3aff2ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:26 np0005592157 podman[75451]: 2026-01-22 13:32:26.169005644 +0000 UTC m=+0.044569372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:26 np0005592157 podman[75451]: 2026-01-22 13:32:26.278763916 +0000 UTC m=+0.154327614 container init c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163 (image=quay.io/ceph/ceph:v18, name=musing_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:26 np0005592157 podman[75451]: 2026-01-22 13:32:26.287716297 +0000 UTC m=+0.163279975 container start c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163 (image=quay.io/ceph/ceph:v18, name=musing_matsumoto, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:32:26 np0005592157 podman[75451]: 2026-01-22 13:32:26.294014612 +0000 UTC m=+0.169578330 container attach c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163 (image=quay.io/ceph/ceph:v18, name=musing_matsumoto, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 08:32:27 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'crash'
Jan 22 08:32:28 np0005592157 ceph-mgr[74655]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:32:28 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'dashboard'
Jan 22 08:32:28 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:28.262+0000 7fdfa9ad2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:32:29 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'devicehealth'
Jan 22 08:32:30 np0005592157 ceph-mgr[74655]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:32:30 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 08:32:30 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:30.001+0000 7fdfa9ad2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:32:30 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 08:32:30 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 08:32:30 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  from numpy import show_config as show_numpy_config
Jan 22 08:32:30 np0005592157 ceph-mgr[74655]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:32:30 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:30.598+0000 7fdfa9ad2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:32:30 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'influx'
Jan 22 08:32:30 np0005592157 ceph-mgr[74655]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:32:30 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'insights'
Jan 22 08:32:30 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:30.885+0000 7fdfa9ad2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:32:31 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'iostat'
Jan 22 08:32:31 np0005592157 ceph-mgr[74655]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:32:31 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:31.406+0000 7fdfa9ad2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:32:31 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'k8sevents'
Jan 22 08:32:33 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'localpool'
Jan 22 08:32:33 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 08:32:34 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'mirroring'
Jan 22 08:32:34 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'nfs'
Jan 22 08:32:35 np0005592157 ceph-mgr[74655]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:32:35 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'orchestrator'
Jan 22 08:32:35 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:35.213+0000 7fdfa9ad2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:32:35 np0005592157 ceph-mgr[74655]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:35 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 08:32:35 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:35.914+0000 7fdfa9ad2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'osd_support'
Jan 22 08:32:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:36.203+0000 7fdfa9ad2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 08:32:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:36.447+0000 7fdfa9ad2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'progress'
Jan 22 08:32:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:36.736+0000 7fdfa9ad2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:32:36 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'prometheus'
Jan 22 08:32:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:36.996+0000 7fdfa9ad2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:32:38 np0005592157 ceph-mgr[74655]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:32:38 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'rbd_support'
Jan 22 08:32:38 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:38.048+0000 7fdfa9ad2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:32:38 np0005592157 ceph-mgr[74655]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:32:38 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:38.381+0000 7fdfa9ad2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:32:38 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'restful'
Jan 22 08:32:39 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'rgw'
Jan 22 08:32:40 np0005592157 ceph-mgr[74655]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:32:40 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'rook'
Jan 22 08:32:40 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:40.068+0000 7fdfa9ad2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:32:42 np0005592157 ceph-mgr[74655]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:32:42 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'selftest'
Jan 22 08:32:42 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:42.397+0000 7fdfa9ad2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:32:42 np0005592157 ceph-mgr[74655]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:32:42 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:42.661+0000 7fdfa9ad2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:32:42 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'snap_schedule'
Jan 22 08:32:42 np0005592157 ceph-mgr[74655]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:32:42 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'stats'
Jan 22 08:32:42 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:42.922+0000 7fdfa9ad2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:32:43 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'status'
Jan 22 08:32:43 np0005592157 ceph-mgr[74655]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:32:43 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'telegraf'
Jan 22 08:32:43 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:43.469+0000 7fdfa9ad2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:32:43 np0005592157 ceph-mgr[74655]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:32:43 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'telemetry'
Jan 22 08:32:43 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:43.743+0000 7fdfa9ad2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:32:44 np0005592157 ceph-mgr[74655]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:32:44 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 08:32:44 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:44.453+0000 7fdfa9ad2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:32:45 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:45.189+0000 7fdfa9ad2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:45 np0005592157 ceph-mgr[74655]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:32:45 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'volumes'
Jan 22 08:32:45 np0005592157 ceph-mgr[74655]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:32:45 np0005592157 ceph-mgr[74655]: mgr[py] Loading python module 'zabbix'
Jan 22 08:32:45 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:45.965+0000 7fdfa9ad2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:32:46 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:32:46.216+0000 7fdfa9ad2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Active manager daemon compute-0.nyayzk restarted
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nyayzk
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: ms_deliver_dispatch: unhandled message 0x55be24f9a420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr handle_mgr_map Activating!
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr handle_mgr_map I am now activating
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.nyayzk(active, starting, since 0.0242294s)
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.nyayzk", "id": "compute-0.nyayzk"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nyayzk", "id": "compute-0.nyayzk"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e1 all = 1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Manager daemon compute-0.nyayzk is now available
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: balancer
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Starting
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:32:46
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] No pools available
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: Active manager daemon compute-0.nyayzk restarted
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: Activating manager daemon compute-0.nyayzk
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: Manager daemon compute-0.nyayzk is now available
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: cephadm
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: crash
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: devicehealth
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Starting
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: iostat
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: nfs
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: orchestrator
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: pg_autoscaler
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: progress
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [progress INFO root] Loading...
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [progress INFO root] No stored events to load
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [progress INFO root] Loaded [] historic events
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [progress INFO root] Loaded OSDMap, ready.
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] recovery thread starting
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] starting setup
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: rbd_support
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: restful
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/mirror_snapshot_schedule"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/mirror_snapshot_schedule"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [restful INFO root] server_addr: :: server_port: 8003
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: status
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] PerfHandler: starting
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [restful WARNING root] server not running: no certificate configured
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: telemetry
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TaskHandler: starting
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/trash_purge_schedule"} v 0) v1
Jan 22 08:32:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/trash_purge_schedule"}]: dispatch
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] setup complete
Jan 22 08:32:46 np0005592157 ceph-mgr[74655]: mgr load Constructed class from module: volumes
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019930302 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.nyayzk(active, since 1.0307s)
Jan 22 08:32:47 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 22 08:32:47 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:47 np0005592157 musing_matsumoto[75467]: {
Jan 22 08:32:47 np0005592157 musing_matsumoto[75467]:    "mgrmap_epoch": 7,
Jan 22 08:32:47 np0005592157 musing_matsumoto[75467]:    "initialized": true
Jan 22 08:32:47 np0005592157 musing_matsumoto[75467]: }
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:47 np0005592157 systemd[1]: libpod-c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163.scope: Deactivated successfully.
Jan 22 08:32:47 np0005592157 podman[75451]: 2026-01-22 13:32:47.30058754 +0000 UTC m=+21.176151208 container died c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163 (image=quay.io/ceph/ceph:v18, name=musing_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: Found migration_current of "None". Setting to last migration.
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/mirror_snapshot_schedule"}]: dispatch
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nyayzk/trash_purge_schedule"}]: dispatch
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5ed8688ab8c8ca8f12c7bc100235a9879447d33bf7412506a5925670d3aff2ed-merged.mount: Deactivated successfully.
Jan 22 08:32:47 np0005592157 podman[75451]: 2026-01-22 13:32:47.347728564 +0000 UTC m=+21.223292202 container remove c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163 (image=quay.io/ceph/ceph:v18, name=musing_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:32:47 np0005592157 systemd[1]: libpod-conmon-c4242e98538369793ca7bfc4d17ab85a98727f49c2934ccb9c2f61ec0e237163.scope: Deactivated successfully.
Jan 22 08:32:47 np0005592157 podman[75627]: 2026-01-22 13:32:47.416568045 +0000 UTC m=+0.044919731 container create 4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb (image=quay.io/ceph/ceph:v18, name=sleepy_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:32:47 np0005592157 systemd[1]: Started libpod-conmon-4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb.scope.
Jan 22 08:32:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065e090f603fb87ef38666a1de9ebc346cb350d0276f12eb1d09fc2e21811929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065e090f603fb87ef38666a1de9ebc346cb350d0276f12eb1d09fc2e21811929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065e090f603fb87ef38666a1de9ebc346cb350d0276f12eb1d09fc2e21811929/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:47 np0005592157 podman[75627]: 2026-01-22 13:32:47.394867179 +0000 UTC m=+0.023218885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:47 np0005592157 podman[75627]: 2026-01-22 13:32:47.499005282 +0000 UTC m=+0.127356998 container init 4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb (image=quay.io/ceph/ceph:v18, name=sleepy_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 08:32:47 np0005592157 podman[75627]: 2026-01-22 13:32:47.504154899 +0000 UTC m=+0.132506605 container start 4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb (image=quay.io/ceph/ceph:v18, name=sleepy_noether, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:47 np0005592157 podman[75627]: 2026-01-22 13:32:47.510977308 +0000 UTC m=+0.139329034 container attach 4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb (image=quay.io/ceph/ceph:v18, name=sleepy_noether, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:48 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 22 08:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 22 08:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 08:32:48 np0005592157 systemd[1]: libpod-4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb.scope: Deactivated successfully.
Jan 22 08:32:48 np0005592157 podman[75627]: 2026-01-22 13:32:48.165269433 +0000 UTC m=+0.793621149 container died 4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb (image=quay.io/ceph/ceph:v18, name=sleepy_noether, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 08:32:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-065e090f603fb87ef38666a1de9ebc346cb350d0276f12eb1d09fc2e21811929-merged.mount: Deactivated successfully.
Jan 22 08:32:48 np0005592157 podman[75627]: 2026-01-22 13:32:48.209231319 +0000 UTC m=+0.837583005 container remove 4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb (image=quay.io/ceph/ceph:v18, name=sleepy_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:48 np0005592157 systemd[1]: libpod-conmon-4e7a94722e6f841f7acc271c01bd28963934a74894b4e105f7da752f6057ebeb.scope: Deactivated successfully.
Jan 22 08:32:48 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:48 np0005592157 podman[75683]: 2026-01-22 13:32:48.2817458 +0000 UTC m=+0.050660522 container create 3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6 (image=quay.io/ceph/ceph:v18, name=keen_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:32:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:48 np0005592157 systemd[1]: Started libpod-conmon-3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6.scope.
Jan 22 08:32:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ade791730e0b15e87cc870e90b5579c07c939a070b04342a4cc9ad35a814a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ade791730e0b15e87cc870e90b5579c07c939a070b04342a4cc9ad35a814a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ade791730e0b15e87cc870e90b5579c07c939a070b04342a4cc9ad35a814a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:48 np0005592157 podman[75683]: 2026-01-22 13:32:48.343384883 +0000 UTC m=+0.112299685 container init 3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6 (image=quay.io/ceph/ceph:v18, name=keen_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:32:48 np0005592157 podman[75683]: 2026-01-22 13:32:48.351951755 +0000 UTC m=+0.120866487 container start 3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6 (image=quay.io/ceph/ceph:v18, name=keen_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:48 np0005592157 podman[75683]: 2026-01-22 13:32:48.258145287 +0000 UTC m=+0.027060099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:48 np0005592157 podman[75683]: 2026-01-22 13:32:48.35700751 +0000 UTC m=+0.125922242 container attach 3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6 (image=quay.io/ceph/ceph:v18, name=keen_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:32:48 np0005592157 ceph-mgr[74655]: [cephadm INFO cherrypy.error] [22/Jan/2026:13:32:48] ENGINE Bus STARTING
Jan 22 08:32:48 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : [22/Jan/2026:13:32:48] ENGINE Bus STARTING
Jan 22 08:32:48 np0005592157 ceph-mgr[74655]: [cephadm INFO cherrypy.error] [22/Jan/2026:13:32:48] ENGINE Serving on http://192.168.122.100:8765
Jan 22 08:32:48 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : [22/Jan/2026:13:32:48] ENGINE Serving on http://192.168.122.100:8765
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO cherrypy.error] [22/Jan/2026:13:32:49] ENGINE Serving on https://192.168.122.100:7150
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : [22/Jan/2026:13:32:49] ENGINE Serving on https://192.168.122.100:7150
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO cherrypy.error] [22/Jan/2026:13:32:49] ENGINE Bus STARTED
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : [22/Jan/2026:13:32:49] ENGINE Bus STARTED
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO cherrypy.error] [22/Jan/2026:13:32:49] ENGINE Client ('192.168.122.100', 33130) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : [22/Jan/2026:13:32:49] ENGINE Client ('192.168.122.100', 33130) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Set ssh ssh_user
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Set ssh ssh_config
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 22 08:32:49 np0005592157 keen_carver[75699]: ssh user set to ceph-admin. sudo will be used
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.nyayzk(active, since 2s)
Jan 22 08:32:49 np0005592157 systemd[1]: libpod-3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6.scope: Deactivated successfully.
Jan 22 08:32:49 np0005592157 podman[75683]: 2026-01-22 13:32:49.154523574 +0000 UTC m=+0.923438306 container died 3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6 (image=quay.io/ceph/ceph:v18, name=keen_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-59ade791730e0b15e87cc870e90b5579c07c939a070b04342a4cc9ad35a814a6-merged.mount: Deactivated successfully.
Jan 22 08:32:49 np0005592157 podman[75683]: 2026-01-22 13:32:49.195356323 +0000 UTC m=+0.964271095 container remove 3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6 (image=quay.io/ceph/ceph:v18, name=keen_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:32:49 np0005592157 systemd[1]: libpod-conmon-3fafa394ca0f81c0d15a08880ddee0721be5fe139e8fb85579bf869f3a96daf6.scope: Deactivated successfully.
Jan 22 08:32:49 np0005592157 podman[75759]: 2026-01-22 13:32:49.256890373 +0000 UTC m=+0.039992019 container create 505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de (image=quay.io/ceph/ceph:v18, name=agitated_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:49 np0005592157 systemd[1]: Started libpod-conmon-505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de.scope.
Jan 22 08:32:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3757192aab8abd0ea801ba9adaacb483ffb8724735c084250d0fe165c7ba6e88/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3757192aab8abd0ea801ba9adaacb483ffb8724735c084250d0fe165c7ba6e88/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3757192aab8abd0ea801ba9adaacb483ffb8724735c084250d0fe165c7ba6e88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3757192aab8abd0ea801ba9adaacb483ffb8724735c084250d0fe165c7ba6e88/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3757192aab8abd0ea801ba9adaacb483ffb8724735c084250d0fe165c7ba6e88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:49 np0005592157 podman[75759]: 2026-01-22 13:32:49.330256976 +0000 UTC m=+0.113358642 container init 505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de (image=quay.io/ceph/ceph:v18, name=agitated_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:32:49 np0005592157 podman[75759]: 2026-01-22 13:32:49.237634888 +0000 UTC m=+0.020736564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:49 np0005592157 podman[75759]: 2026-01-22 13:32:49.336945681 +0000 UTC m=+0.120047337 container start 505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de (image=quay.io/ceph/ceph:v18, name=agitated_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:32:49 np0005592157 podman[75759]: 2026-01-22 13:32:49.340848948 +0000 UTC m=+0.123950614 container attach 505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de (image=quay.io/ceph/ceph:v18, name=agitated_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 22 08:32:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Set ssh private key
Jan 22 08:32:49 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 22 08:32:49 np0005592157 systemd[1]: libpod-505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de.scope: Deactivated successfully.
Jan 22 08:32:49 np0005592157 podman[75759]: 2026-01-22 13:32:49.914521481 +0000 UTC m=+0.697623127 container died 505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de (image=quay.io/ceph/ceph:v18, name=agitated_dubinsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:32:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3757192aab8abd0ea801ba9adaacb483ffb8724735c084250d0fe165c7ba6e88-merged.mount: Deactivated successfully.
Jan 22 08:32:49 np0005592157 podman[75759]: 2026-01-22 13:32:49.966638769 +0000 UTC m=+0.749740445 container remove 505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de (image=quay.io/ceph/ceph:v18, name=agitated_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 08:32:49 np0005592157 systemd[1]: libpod-conmon-505362b3eff6e875e28affb5ad8f73eeb4dc8d1ca983d50ee0e55c00155aa4de.scope: Deactivated successfully.
Jan 22 08:32:50 np0005592157 podman[75813]: 2026-01-22 13:32:50.013476736 +0000 UTC m=+0.027052609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:50 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:50 np0005592157 podman[75813]: 2026-01-22 13:32:50.328602112 +0000 UTC m=+0.342177935 container create 27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73 (image=quay.io/ceph/ceph:v18, name=affectionate_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: [22/Jan/2026:13:32:48] ENGINE Bus STARTING
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: [22/Jan/2026:13:32:48] ENGINE Serving on http://192.168.122.100:8765
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: [22/Jan/2026:13:32:49] ENGINE Serving on https://192.168.122.100:7150
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: [22/Jan/2026:13:32:49] ENGINE Bus STARTED
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: [22/Jan/2026:13:32:49] ENGINE Client ('192.168.122.100', 33130) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: Set ssh ssh_user
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: Set ssh ssh_config
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: ssh user set to ceph-admin. sudo will be used
Jan 22 08:32:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:50 np0005592157 systemd[1]: Started libpod-conmon-27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73.scope.
Jan 22 08:32:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad9a7f8d87045b3b8f70e59b3d3b419a74e9c58edcc3bcb72ff0ff2bdd4b871/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad9a7f8d87045b3b8f70e59b3d3b419a74e9c58edcc3bcb72ff0ff2bdd4b871/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad9a7f8d87045b3b8f70e59b3d3b419a74e9c58edcc3bcb72ff0ff2bdd4b871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad9a7f8d87045b3b8f70e59b3d3b419a74e9c58edcc3bcb72ff0ff2bdd4b871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad9a7f8d87045b3b8f70e59b3d3b419a74e9c58edcc3bcb72ff0ff2bdd4b871/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:50 np0005592157 podman[75813]: 2026-01-22 13:32:50.991231744 +0000 UTC m=+1.004807627 container init 27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73 (image=quay.io/ceph/ceph:v18, name=affectionate_hermann, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:32:51 np0005592157 podman[75813]: 2026-01-22 13:32:51.000972705 +0000 UTC m=+1.014548498 container start 27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73 (image=quay.io/ceph/ceph:v18, name=affectionate_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:51 np0005592157 podman[75813]: 2026-01-22 13:32:51.006615654 +0000 UTC m=+1.020191477 container attach 27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73 (image=quay.io/ceph/ceph:v18, name=affectionate_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:32:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 22 08:32:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:51 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 22 08:32:51 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 22 08:32:51 np0005592157 systemd[1]: libpod-27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73.scope: Deactivated successfully.
Jan 22 08:32:51 np0005592157 podman[75813]: 2026-01-22 13:32:51.684400249 +0000 UTC m=+1.697976042 container died 27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73 (image=quay.io/ceph/ceph:v18, name=affectionate_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 08:32:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-aad9a7f8d87045b3b8f70e59b3d3b419a74e9c58edcc3bcb72ff0ff2bdd4b871-merged.mount: Deactivated successfully.
Jan 22 08:32:51 np0005592157 podman[75813]: 2026-01-22 13:32:51.741224513 +0000 UTC m=+1.754800306 container remove 27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73 (image=quay.io/ceph/ceph:v18, name=affectionate_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:51 np0005592157 systemd[1]: libpod-conmon-27f5431898c66d5fd7bfa29789059189bcef554ff389c32f5efdaeda07f96f73.scope: Deactivated successfully.
Jan 22 08:32:51 np0005592157 podman[75867]: 2026-01-22 13:32:51.797310609 +0000 UTC m=+0.030246228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:51 np0005592157 ceph-mon[74359]: Set ssh ssh_identity_key
Jan 22 08:32:51 np0005592157 ceph-mon[74359]: Set ssh private key
Jan 22 08:32:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:51 np0005592157 podman[75867]: 2026-01-22 13:32:51.928589843 +0000 UTC m=+0.161525482 container create eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a (image=quay.io/ceph/ceph:v18, name=nifty_germain, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:32:51 np0005592157 systemd[1]: Started libpod-conmon-eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a.scope.
Jan 22 08:32:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe73961603f20e5b3bc20393f97ad99628d216806f1abff096c77d5d5134929f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe73961603f20e5b3bc20393f97ad99628d216806f1abff096c77d5d5134929f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe73961603f20e5b3bc20393f97ad99628d216806f1abff096c77d5d5134929f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053147 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:32:52 np0005592157 podman[75867]: 2026-01-22 13:32:52.094976033 +0000 UTC m=+0.327911672 container init eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a (image=quay.io/ceph/ceph:v18, name=nifty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:32:52 np0005592157 podman[75867]: 2026-01-22 13:32:52.101904315 +0000 UTC m=+0.334839924 container start eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a (image=quay.io/ceph/ceph:v18, name=nifty_germain, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:52 np0005592157 podman[75867]: 2026-01-22 13:32:52.106388435 +0000 UTC m=+0.339324044 container attach eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a (image=quay.io/ceph/ceph:v18, name=nifty_germain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:32:52 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:52 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:32:52 np0005592157 nifty_germain[75883]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIr04Vjhi6uGng41PjP62qRZCHHhuPc0NCrQhhxGMoTCMCg+OZi1quhrS8r2Rg3DUwLcYU+lmIlSl6T6fVnMOuyDQ+aatseAZXeYaG9NvADnt8XIgkE8w+UzPE+BCQR6SUFn9Dh+/Ee9do0rk0Y/NCZW4KeX+tp89T0YQR4NBXrKwNeDP3vg+v4omhIwXtNJA8kW8TKnsfbfj7/j0bwa9bse/VI8ykjX02wocpsxurHECkqM5k/H7fvya3aDpKpuuxZYFFVPL9wE9B4UfnO/3a85TAq0l3DZjyaXzNLe5V7/dHIBC44ms+wjEJLga+VXoYGzWHlFCgm/dIzQoq1c5JX0iphiRFx9IriQDDPfhrat/ZnbOgdyNpfTwAD8jN1CvgsIPd0JE6NuLw+r3s+FFYOVDtfhrunWyJ07jfhuUScnkLltIGjKrUY/0vmpJVlpEPIHRlYH/fHy7n9Znb8jy5xqEJaOGkWgS2CWGAx12ORIJ9obdjv3740NmLcpifWj0= zuul@controller
Jan 22 08:32:52 np0005592157 systemd[1]: libpod-eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a.scope: Deactivated successfully.
Jan 22 08:32:52 np0005592157 podman[75867]: 2026-01-22 13:32:52.665726025 +0000 UTC m=+0.898661664 container died eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a (image=quay.io/ceph/ceph:v18, name=nifty_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fe73961603f20e5b3bc20393f97ad99628d216806f1abff096c77d5d5134929f-merged.mount: Deactivated successfully.
Jan 22 08:32:52 np0005592157 podman[75867]: 2026-01-22 13:32:52.882834279 +0000 UTC m=+1.115769878 container remove eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a (image=quay.io/ceph/ceph:v18, name=nifty_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 08:32:52 np0005592157 systemd[1]: libpod-conmon-eae7f585a9a2dfad204f3ffe72d4bf32590484e197fe6afcafb06b9b8104674a.scope: Deactivated successfully.
Jan 22 08:32:52 np0005592157 ceph-mon[74359]: Set ssh ssh_identity_pub
Jan 22 08:32:52 np0005592157 podman[75923]: 2026-01-22 13:32:52.972883734 +0000 UTC m=+0.060719881 container create a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:32:53 np0005592157 systemd[1]: Started libpod-conmon-a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099.scope.
Jan 22 08:32:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8712e0f99b16d104507b52abdf5d45374ef25d5564ee91d1b104b2010577536/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8712e0f99b16d104507b52abdf5d45374ef25d5564ee91d1b104b2010577536/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8712e0f99b16d104507b52abdf5d45374ef25d5564ee91d1b104b2010577536/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:53 np0005592157 podman[75923]: 2026-01-22 13:32:52.950988393 +0000 UTC m=+0.038824580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:53 np0005592157 podman[75923]: 2026-01-22 13:32:53.061575955 +0000 UTC m=+0.149412132 container init a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:32:53 np0005592157 podman[75923]: 2026-01-22 13:32:53.072776882 +0000 UTC m=+0.160613049 container start a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:53 np0005592157 podman[75923]: 2026-01-22 13:32:53.076130965 +0000 UTC m=+0.163967132 container attach a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:53 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:32:53 np0005592157 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 08:32:53 np0005592157 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 08:32:53 np0005592157 systemd-logind[785]: New session 21 of user ceph-admin.
Jan 22 08:32:53 np0005592157 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 08:32:53 np0005592157 systemd[1]: Starting User Manager for UID 42477...
Jan 22 08:32:54 np0005592157 systemd[75969]: Queued start job for default target Main User Target.
Jan 22 08:32:54 np0005592157 systemd[75969]: Created slice User Application Slice.
Jan 22 08:32:54 np0005592157 systemd[75969]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 08:32:54 np0005592157 systemd[75969]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 08:32:54 np0005592157 systemd[75969]: Reached target Paths.
Jan 22 08:32:54 np0005592157 systemd[75969]: Reached target Timers.
Jan 22 08:32:54 np0005592157 systemd[75969]: Starting D-Bus User Message Bus Socket...
Jan 22 08:32:54 np0005592157 systemd[75969]: Starting Create User's Volatile Files and Directories...
Jan 22 08:32:54 np0005592157 systemd-logind[785]: New session 23 of user ceph-admin.
Jan 22 08:32:54 np0005592157 systemd[75969]: Finished Create User's Volatile Files and Directories.
Jan 22 08:32:54 np0005592157 systemd[75969]: Listening on D-Bus User Message Bus Socket.
Jan 22 08:32:54 np0005592157 systemd[75969]: Reached target Sockets.
Jan 22 08:32:54 np0005592157 systemd[75969]: Reached target Basic System.
Jan 22 08:32:54 np0005592157 systemd[75969]: Reached target Main User Target.
Jan 22 08:32:54 np0005592157 systemd[75969]: Startup finished in 115ms.
Jan 22 08:32:54 np0005592157 systemd[1]: Started User Manager for UID 42477.
Jan 22 08:32:54 np0005592157 systemd[1]: Started Session 21 of User ceph-admin.
Jan 22 08:32:54 np0005592157 systemd[1]: Started Session 23 of User ceph-admin.
Jan 22 08:32:54 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:54 np0005592157 systemd-logind[785]: New session 24 of user ceph-admin.
Jan 22 08:32:54 np0005592157 systemd[1]: Started Session 24 of User ceph-admin.
Jan 22 08:32:54 np0005592157 systemd-logind[785]: New session 25 of user ceph-admin.
Jan 22 08:32:54 np0005592157 systemd[1]: Started Session 25 of User ceph-admin.
Jan 22 08:32:55 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 22 08:32:55 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 22 08:32:55 np0005592157 systemd-logind[785]: New session 26 of user ceph-admin.
Jan 22 08:32:55 np0005592157 systemd[1]: Started Session 26 of User ceph-admin.
Jan 22 08:32:55 np0005592157 systemd-logind[785]: New session 27 of user ceph-admin.
Jan 22 08:32:55 np0005592157 systemd[1]: Started Session 27 of User ceph-admin.
Jan 22 08:32:55 np0005592157 ceph-mon[74359]: Deploying cephadm binary to compute-0
Jan 22 08:32:56 np0005592157 systemd-logind[785]: New session 28 of user ceph-admin.
Jan 22 08:32:56 np0005592157 systemd[1]: Started Session 28 of User ceph-admin.
Jan 22 08:32:56 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:56 np0005592157 systemd-logind[785]: New session 29 of user ceph-admin.
Jan 22 08:32:56 np0005592157 systemd[1]: Started Session 29 of User ceph-admin.
Jan 22 08:32:56 np0005592157 systemd-logind[785]: New session 30 of user ceph-admin.
Jan 22 08:32:56 np0005592157 systemd[1]: Started Session 30 of User ceph-admin.
Jan 22 08:32:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:32:57 np0005592157 systemd-logind[785]: New session 31 of user ceph-admin.
Jan 22 08:32:57 np0005592157 systemd[1]: Started Session 31 of User ceph-admin.
Jan 22 08:32:57 np0005592157 systemd-logind[785]: New session 32 of user ceph-admin.
Jan 22 08:32:57 np0005592157 systemd[1]: Started Session 32 of User ceph-admin.
Jan 22 08:32:58 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:32:58 np0005592157 systemd-logind[785]: New session 33 of user ceph-admin.
Jan 22 08:32:58 np0005592157 systemd[1]: Started Session 33 of User ceph-admin.
Jan 22 08:32:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:32:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:58 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Added host compute-0
Jan 22 08:32:58 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 22 08:32:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 22 08:32:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 08:32:58 np0005592157 sharp_hawking[75939]: Added host 'compute-0' with addr '192.168.122.100'
Jan 22 08:32:58 np0005592157 systemd[1]: libpod-a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099.scope: Deactivated successfully.
Jan 22 08:32:58 np0005592157 podman[75923]: 2026-01-22 13:32:58.88035414 +0000 UTC m=+5.968190327 container died a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:32:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c8712e0f99b16d104507b52abdf5d45374ef25d5564ee91d1b104b2010577536-merged.mount: Deactivated successfully.
Jan 22 08:32:58 np0005592157 podman[75923]: 2026-01-22 13:32:58.937640104 +0000 UTC m=+6.025476261 container remove a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 08:32:58 np0005592157 systemd[1]: libpod-conmon-a278852d28a1eb752f109a17365b8874879802cfa4d3a01e3a079a979e0e7099.scope: Deactivated successfully.
Jan 22 08:32:59 np0005592157 podman[76626]: 2026-01-22 13:32:59.014021065 +0000 UTC m=+0.047260179 container create 8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88 (image=quay.io/ceph/ceph:v18, name=agitated_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:32:59 np0005592157 systemd[1]: Started libpod-conmon-8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88.scope.
Jan 22 08:32:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c659d643227424bbffc608296224160e77eb9f120cf815a9ee24f0e07419fc2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c659d643227424bbffc608296224160e77eb9f120cf815a9ee24f0e07419fc2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c659d643227424bbffc608296224160e77eb9f120cf815a9ee24f0e07419fc2f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:59 np0005592157 podman[76626]: 2026-01-22 13:32:59.088258565 +0000 UTC m=+0.121497669 container init 8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88 (image=quay.io/ceph/ceph:v18, name=agitated_rosalind, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 08:32:59 np0005592157 podman[76626]: 2026-01-22 13:32:58.999261434 +0000 UTC m=+0.032500568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:59 np0005592157 podman[76626]: 2026-01-22 13:32:59.096158648 +0000 UTC m=+0.129397762 container start 8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88 (image=quay.io/ceph/ceph:v18, name=agitated_rosalind, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:32:59 np0005592157 podman[76626]: 2026-01-22 13:32:59.099196253 +0000 UTC m=+0.132435367 container attach 8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88 (image=quay.io/ceph/ceph:v18, name=agitated_rosalind, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:32:59 np0005592157 podman[76730]: 2026-01-22 13:32:59.409528607 +0000 UTC m=+0.051072712 container create 2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431 (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 08:32:59 np0005592157 systemd[1]: Started libpod-conmon-2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431.scope.
Jan 22 08:32:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:59 np0005592157 podman[76730]: 2026-01-22 13:32:59.387337544 +0000 UTC m=+0.028881619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:59 np0005592157 podman[76730]: 2026-01-22 13:32:59.488762129 +0000 UTC m=+0.130306224 container init 2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431 (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:32:59 np0005592157 podman[76730]: 2026-01-22 13:32:59.495877343 +0000 UTC m=+0.137421418 container start 2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431 (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:32:59 np0005592157 podman[76730]: 2026-01-22 13:32:59.499636906 +0000 UTC m=+0.141181011 container attach 2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431 (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:32:59 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:32:59 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 22 08:32:59 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 22 08:32:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 22 08:32:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:59 np0005592157 agitated_rosalind[76674]: Scheduled mon update...
Jan 22 08:32:59 np0005592157 systemd[1]: libpod-8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88.scope: Deactivated successfully.
Jan 22 08:32:59 np0005592157 podman[76626]: 2026-01-22 13:32:59.66795119 +0000 UTC m=+0.701190314 container died 8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88 (image=quay.io/ceph/ceph:v18, name=agitated_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:32:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c659d643227424bbffc608296224160e77eb9f120cf815a9ee24f0e07419fc2f-merged.mount: Deactivated successfully.
Jan 22 08:32:59 np0005592157 podman[76626]: 2026-01-22 13:32:59.717428393 +0000 UTC m=+0.750667527 container remove 8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88 (image=quay.io/ceph/ceph:v18, name=agitated_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 08:32:59 np0005592157 systemd[1]: libpod-conmon-8d79e60455cf0b2ff5ded2c725cf26989a220b1e934b9f535b683e0377e54f88.scope: Deactivated successfully.
Jan 22 08:32:59 np0005592157 podman[76786]: 2026-01-22 13:32:59.782401515 +0000 UTC m=+0.044949033 container create 80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f (image=quay.io/ceph/ceph:v18, name=priceless_haslett, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:32:59 np0005592157 systemd[1]: Started libpod-conmon-80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f.scope.
Jan 22 08:32:59 np0005592157 epic_heyrovsky[76765]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 22 08:32:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:32:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb053c292f0941db1ff2c86889e0a68477666e0c4ce95736e59665f7b1377a13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb053c292f0941db1ff2c86889e0a68477666e0c4ce95736e59665f7b1377a13/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb053c292f0941db1ff2c86889e0a68477666e0c4ce95736e59665f7b1377a13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:32:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:59 np0005592157 ceph-mon[74359]: Added host compute-0
Jan 22 08:32:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:32:59 np0005592157 systemd[1]: libpod-2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431.scope: Deactivated successfully.
Jan 22 08:32:59 np0005592157 podman[76730]: 2026-01-22 13:32:59.849098899 +0000 UTC m=+0.490642964 container died 2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431 (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:32:59 np0005592157 podman[76786]: 2026-01-22 13:32:59.760456597 +0000 UTC m=+0.023004155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:32:59 np0005592157 podman[76786]: 2026-01-22 13:32:59.862576639 +0000 UTC m=+0.125124147 container init 80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f (image=quay.io/ceph/ceph:v18, name=priceless_haslett, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 08:32:59 np0005592157 podman[76786]: 2026-01-22 13:32:59.867870149 +0000 UTC m=+0.130417657 container start 80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f (image=quay.io/ceph/ceph:v18, name=priceless_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 08:32:59 np0005592157 podman[76786]: 2026-01-22 13:32:59.871341764 +0000 UTC m=+0.133889392 container attach 80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f (image=quay.io/ceph/ceph:v18, name=priceless_haslett, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:32:59 np0005592157 podman[76730]: 2026-01-22 13:32:59.895282941 +0000 UTC m=+0.536827006 container remove 2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431 (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 08:32:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-acab61ff21a754faed115442daf56ce2bdf29692eb79c6d3944bbe8d668f29aa-merged.mount: Deactivated successfully.
Jan 22 08:32:59 np0005592157 systemd[1]: libpod-conmon-2f22d34c478f7d35fa0abbe2445d222c5b078d8769b474946c0609b80ead6431.scope: Deactivated successfully.
Jan 22 08:32:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 22 08:32:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:00 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:33:00 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:33:00 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 22 08:33:00 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:00 np0005592157 priceless_haslett[76803]: Scheduled mgr update...
Jan 22 08:33:00 np0005592157 systemd[1]: libpod-80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f.scope: Deactivated successfully.
Jan 22 08:33:00 np0005592157 podman[76786]: 2026-01-22 13:33:00.454525765 +0000 UTC m=+0.717073273 container died 80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f (image=quay.io/ceph/ceph:v18, name=priceless_haslett, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-eb053c292f0941db1ff2c86889e0a68477666e0c4ce95736e59665f7b1377a13-merged.mount: Deactivated successfully.
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:00 np0005592157 podman[76786]: 2026-01-22 13:33:00.508062757 +0000 UTC m=+0.770610275 container remove 80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f (image=quay.io/ceph/ceph:v18, name=priceless_haslett, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:33:00 np0005592157 systemd[1]: libpod-conmon-80dee238fb908798599c046135ddd44c34cf1c471965c9a9addd3e056fbbf53f.scope: Deactivated successfully.
Jan 22 08:33:00 np0005592157 podman[76995]: 2026-01-22 13:33:00.571463141 +0000 UTC m=+0.044264106 container create a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994 (image=quay.io/ceph/ceph:v18, name=thirsty_fermi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:00 np0005592157 systemd[1]: Started libpod-conmon-a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994.scope.
Jan 22 08:33:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b796a81876ba925958ef065ca80e4922c7c955aa507fe8300d20f973479b1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b796a81876ba925958ef065ca80e4922c7c955aa507fe8300d20f973479b1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b796a81876ba925958ef065ca80e4922c7c955aa507fe8300d20f973479b1a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:00 np0005592157 podman[76995]: 2026-01-22 13:33:00.550160829 +0000 UTC m=+0.022961834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:00 np0005592157 podman[76995]: 2026-01-22 13:33:00.65139499 +0000 UTC m=+0.124195975 container init a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994 (image=quay.io/ceph/ceph:v18, name=thirsty_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:00 np0005592157 podman[76995]: 2026-01-22 13:33:00.657016477 +0000 UTC m=+0.129817442 container start a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994 (image=quay.io/ceph/ceph:v18, name=thirsty_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 08:33:00 np0005592157 podman[76995]: 2026-01-22 13:33:00.660076372 +0000 UTC m=+0.132877347 container attach a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994 (image=quay.io/ceph/ceph:v18, name=thirsty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: Saving service mon spec with placement count:5
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:01 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:33:01 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service crash spec with placement *
Jan 22 08:33:01 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 22 08:33:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 22 08:33:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:01 np0005592157 thirsty_fermi[77042]: Scheduled crash update...
Jan 22 08:33:01 np0005592157 systemd[1]: libpod-a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994.scope: Deactivated successfully.
Jan 22 08:33:01 np0005592157 podman[76995]: 2026-01-22 13:33:01.218799104 +0000 UTC m=+0.691600059 container died a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994 (image=quay.io/ceph/ceph:v18, name=thirsty_fermi, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f9b796a81876ba925958ef065ca80e4922c7c955aa507fe8300d20f973479b1a-merged.mount: Deactivated successfully.
Jan 22 08:33:01 np0005592157 podman[76995]: 2026-01-22 13:33:01.269789183 +0000 UTC m=+0.742590148 container remove a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994 (image=quay.io/ceph/ceph:v18, name=thirsty_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:01 np0005592157 systemd[1]: libpod-conmon-a2f4d8d406c569c2021e4fbbf5c854a63c40089fb88b6c9d2cfc332876f4e994.scope: Deactivated successfully.
Jan 22 08:33:01 np0005592157 podman[77193]: 2026-01-22 13:33:01.326865152 +0000 UTC m=+0.097988472 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:01 np0005592157 podman[77215]: 2026-01-22 13:33:01.340839495 +0000 UTC m=+0.049122125 container create baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250 (image=quay.io/ceph/ceph:v18, name=eager_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:33:01 np0005592157 systemd[1]: Started libpod-conmon-baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250.scope.
Jan 22 08:33:01 np0005592157 podman[77215]: 2026-01-22 13:33:01.316694733 +0000 UTC m=+0.024977453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347e8364e554351e7ff1d381916b421501b8ed6b8a45cb643470169c994731e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347e8364e554351e7ff1d381916b421501b8ed6b8a45cb643470169c994731e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347e8364e554351e7ff1d381916b421501b8ed6b8a45cb643470169c994731e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:01 np0005592157 podman[77215]: 2026-01-22 13:33:01.440284782 +0000 UTC m=+0.148567452 container init baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250 (image=quay.io/ceph/ceph:v18, name=eager_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:33:01 np0005592157 podman[77215]: 2026-01-22 13:33:01.452362037 +0000 UTC m=+0.160644667 container start baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250 (image=quay.io/ceph/ceph:v18, name=eager_euler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:33:01 np0005592157 podman[77215]: 2026-01-22 13:33:01.456368436 +0000 UTC m=+0.164651096 container attach baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250 (image=quay.io/ceph/ceph:v18, name=eager_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:33:01 np0005592157 podman[77193]: 2026-01-22 13:33:01.616382377 +0000 UTC m=+0.387505677 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:33:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2115536812' entity='client.admin' 
Jan 22 08:33:02 np0005592157 systemd[1]: libpod-baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250.scope: Deactivated successfully.
Jan 22 08:33:02 np0005592157 podman[77215]: 2026-01-22 13:33:02.022841917 +0000 UTC m=+0.731124547 container died baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250 (image=quay.io/ceph/ceph:v18, name=eager_euler, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6347e8364e554351e7ff1d381916b421501b8ed6b8a45cb643470169c994731e-merged.mount: Deactivated successfully.
Jan 22 08:33:02 np0005592157 podman[77215]: 2026-01-22 13:33:02.064914208 +0000 UTC m=+0.773196838 container remove baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250 (image=quay.io/ceph/ceph:v18, name=eager_euler, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:33:02 np0005592157 systemd[1]: libpod-conmon-baba7021b9c2ef3db80261d20a61a71de49ca4c8e1ee586ecbae9fb4be2ff250.scope: Deactivated successfully.
Jan 22 08:33:02 np0005592157 podman[77407]: 2026-01-22 13:33:02.129567913 +0000 UTC m=+0.044051591 container create 3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357 (image=quay.io/ceph/ceph:v18, name=sweet_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:33:02 np0005592157 systemd[1]: Started libpod-conmon-3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357.scope.
Jan 22 08:33:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:02 np0005592157 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77438 (sysctl)
Jan 22 08:33:02 np0005592157 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 22 08:33:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe34d71d12e60fa505fb32c5de107d76698eb574de3323af0b09f0be3e2a81/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe34d71d12e60fa505fb32c5de107d76698eb574de3323af0b09f0be3e2a81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe34d71d12e60fa505fb32c5de107d76698eb574de3323af0b09f0be3e2a81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: Saving service mgr spec with placement count:2
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: Saving service crash spec with placement *
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2115536812' entity='client.admin' 
Jan 22 08:33:02 np0005592157 podman[77407]: 2026-01-22 13:33:02.108126847 +0000 UTC m=+0.022610555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:02 np0005592157 podman[77407]: 2026-01-22 13:33:02.21439659 +0000 UTC m=+0.128880268 container init 3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357 (image=quay.io/ceph/ceph:v18, name=sweet_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:33:02 np0005592157 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 22 08:33:02 np0005592157 podman[77407]: 2026-01-22 13:33:02.230895935 +0000 UTC m=+0.145379613 container start 3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357 (image=quay.io/ceph/ceph:v18, name=sweet_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:02 np0005592157 podman[77407]: 2026-01-22 13:33:02.236905232 +0000 UTC m=+0.151388980 container attach 3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357 (image=quay.io/ceph/ceph:v18, name=sweet_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 08:33:02 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:33:02 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 22 08:33:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:02 np0005592157 systemd[1]: libpod-3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357.scope: Deactivated successfully.
Jan 22 08:33:02 np0005592157 podman[77407]: 2026-01-22 13:33:02.808015187 +0000 UTC m=+0.722498865 container died 3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357 (image=quay.io/ceph/ceph:v18, name=sweet_goldberg, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:33:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2bbe34d71d12e60fa505fb32c5de107d76698eb574de3323af0b09f0be3e2a81-merged.mount: Deactivated successfully.
Jan 22 08:33:02 np0005592157 podman[77407]: 2026-01-22 13:33:02.860548924 +0000 UTC m=+0.775032602 container remove 3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357 (image=quay.io/ceph/ceph:v18, name=sweet_goldberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:33:02 np0005592157 systemd[1]: libpod-conmon-3cfe40d85347b371558bc93884a34c21a5921ddcaae0c4fecccdabfc75a89357.scope: Deactivated successfully.
Jan 22 08:33:02 np0005592157 podman[77595]: 2026-01-22 13:33:02.9391262 +0000 UTC m=+0.054934787 container create 38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc (image=quay.io/ceph/ceph:v18, name=eager_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:02 np0005592157 systemd[1]: Started libpod-conmon-38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc.scope.
Jan 22 08:33:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49dfda968c66d12e686a7413990f0574199d20e48eaa4712d7d916c32c19192/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49dfda968c66d12e686a7413990f0574199d20e48eaa4712d7d916c32c19192/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49dfda968c66d12e686a7413990f0574199d20e48eaa4712d7d916c32c19192/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:03 np0005592157 podman[77595]: 2026-01-22 13:33:02.912398865 +0000 UTC m=+0.028207462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:03 np0005592157 podman[77595]: 2026-01-22 13:33:03.01830376 +0000 UTC m=+0.134112307 container init 38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 08:33:03 np0005592157 podman[77595]: 2026-01-22 13:33:03.024689337 +0000 UTC m=+0.140497884 container start 38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:03 np0005592157 podman[77595]: 2026-01-22 13:33:03.02768883 +0000 UTC m=+0.143497367 container attach 38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc (image=quay.io/ceph/ceph:v18, name=eager_jennings, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:03 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:33:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:33:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:03 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Added label _admin to host compute-0
Jan 22 08:33:03 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 22 08:33:03 np0005592157 eager_jennings[77619]: Added label _admin to host compute-0
Jan 22 08:33:03 np0005592157 podman[77793]: 2026-01-22 13:33:03.632784828 +0000 UTC m=+0.069529815 container create 9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:03 np0005592157 systemd[1]: libpod-38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc.scope: Deactivated successfully.
Jan 22 08:33:03 np0005592157 podman[77595]: 2026-01-22 13:33:03.654295675 +0000 UTC m=+0.770104222 container died 38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc (image=quay.io/ceph/ceph:v18, name=eager_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:03 np0005592157 systemd[1]: Started libpod-conmon-9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f.scope.
Jan 22 08:33:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d49dfda968c66d12e686a7413990f0574199d20e48eaa4712d7d916c32c19192-merged.mount: Deactivated successfully.
Jan 22 08:33:03 np0005592157 podman[77793]: 2026-01-22 13:33:03.604522566 +0000 UTC m=+0.041267593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:03 np0005592157 podman[77595]: 2026-01-22 13:33:03.71323874 +0000 UTC m=+0.829047297 container remove 38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:03 np0005592157 podman[77793]: 2026-01-22 13:33:03.719653627 +0000 UTC m=+0.156398684 container init 9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mcclintock, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:03 np0005592157 systemd[1]: libpod-conmon-38bc84c81ae08c34fefdab4229283bdc85a653c73a55c180cddb4e01011cd2bc.scope: Deactivated successfully.
Jan 22 08:33:03 np0005592157 podman[77793]: 2026-01-22 13:33:03.726370442 +0000 UTC m=+0.163115429 container start 9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 08:33:03 np0005592157 podman[77793]: 2026-01-22 13:33:03.731746693 +0000 UTC m=+0.168491680 container attach 9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:03 np0005592157 beautiful_mcclintock[77818]: 167 167
Jan 22 08:33:03 np0005592157 systemd[1]: libpod-9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f.scope: Deactivated successfully.
Jan 22 08:33:03 np0005592157 podman[77793]: 2026-01-22 13:33:03.733801264 +0000 UTC m=+0.170546281 container died 9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mcclintock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-af0704dfee66ef38c5e7c3dc5e67ed1a3e8a6c6b131dccdfba7ce3560c4309a2-merged.mount: Deactivated successfully.
Jan 22 08:33:03 np0005592157 podman[77793]: 2026-01-22 13:33:03.776723806 +0000 UTC m=+0.213468793 container remove 9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:03 np0005592157 podman[77830]: 2026-01-22 13:33:03.794527542 +0000 UTC m=+0.054246080 container create 7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf (image=quay.io/ceph/ceph:v18, name=sharp_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:33:03 np0005592157 systemd[1]: libpod-conmon-9bd49faea9429481f22a4dee11177334d44576d49b771c4b63ca85289be3a65f.scope: Deactivated successfully.
Jan 22 08:33:03 np0005592157 systemd[1]: Started libpod-conmon-7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf.scope.
Jan 22 08:33:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400272a1def93a3d5ef31570eeec6a8191d51456a5a3857e4b2cc5a45e283c0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400272a1def93a3d5ef31570eeec6a8191d51456a5a3857e4b2cc5a45e283c0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400272a1def93a3d5ef31570eeec6a8191d51456a5a3857e4b2cc5a45e283c0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:03 np0005592157 podman[77830]: 2026-01-22 13:33:03.771221691 +0000 UTC m=+0.030940269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:03 np0005592157 podman[77830]: 2026-01-22 13:33:03.8817629 +0000 UTC m=+0.141481438 container init 7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf (image=quay.io/ceph/ceph:v18, name=sharp_hamilton, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:03 np0005592157 podman[77830]: 2026-01-22 13:33:03.891672462 +0000 UTC m=+0.151391010 container start 7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf (image=quay.io/ceph/ceph:v18, name=sharp_hamilton, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 08:33:03 np0005592157 podman[77830]: 2026-01-22 13:33:03.895744692 +0000 UTC m=+0.155463230 container attach 7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf (image=quay.io/ceph/ceph:v18, name=sharp_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 22 08:33:04 np0005592157 ceph-mgr[74655]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 08:33:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 22 08:33:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2493825701' entity='client.admin' 
Jan 22 08:33:04 np0005592157 systemd[1]: libpod-7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf.scope: Deactivated successfully.
Jan 22 08:33:04 np0005592157 podman[77830]: 2026-01-22 13:33:04.474049924 +0000 UTC m=+0.733768462 container died 7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf (image=quay.io/ceph/ceph:v18, name=sharp_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-400272a1def93a3d5ef31570eeec6a8191d51456a5a3857e4b2cc5a45e283c0e-merged.mount: Deactivated successfully.
Jan 22 08:33:04 np0005592157 podman[77830]: 2026-01-22 13:33:04.524830888 +0000 UTC m=+0.784549436 container remove 7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf (image=quay.io/ceph/ceph:v18, name=sharp_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:04 np0005592157 systemd[1]: libpod-conmon-7259dd083b4f7c6075437a1a7a60a9477a2c7d0f22bb19dcd667e138ac40cddf.scope: Deactivated successfully.
Jan 22 08:33:04 np0005592157 podman[77900]: 2026-01-22 13:33:04.590407325 +0000 UTC m=+0.043990619 container create a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe (image=quay.io/ceph/ceph:v18, name=eloquent_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:04 np0005592157 systemd[1]: Started libpod-conmon-a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe.scope.
Jan 22 08:33:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0bb7ee3d56fd723b95f818851ceb9ee09dca302958770393b11e67f73351e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0bb7ee3d56fd723b95f818851ceb9ee09dca302958770393b11e67f73351e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0bb7ee3d56fd723b95f818851ceb9ee09dca302958770393b11e67f73351e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:04 np0005592157 podman[77900]: 2026-01-22 13:33:04.664349887 +0000 UTC m=+0.117933251 container init a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe (image=quay.io/ceph/ceph:v18, name=eloquent_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:33:04 np0005592157 podman[77900]: 2026-01-22 13:33:04.572306232 +0000 UTC m=+0.025889536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:04 np0005592157 podman[77900]: 2026-01-22 13:33:04.67097937 +0000 UTC m=+0.124562664 container start a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe (image=quay.io/ceph/ceph:v18, name=eloquent_sinoussi, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:04 np0005592157 podman[77900]: 2026-01-22 13:33:04.674699641 +0000 UTC m=+0.128282945 container attach a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe (image=quay.io/ceph/ceph:v18, name=eloquent_sinoussi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:33:04 np0005592157 ceph-mon[74359]: Added label _admin to host compute-0
Jan 22 08:33:04 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2493825701' entity='client.admin' 
Jan 22 08:33:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 22 08:33:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2413490500' entity='client.admin' 
Jan 22 08:33:05 np0005592157 eloquent_sinoussi[77916]: set mgr/dashboard/cluster/status
Jan 22 08:33:05 np0005592157 systemd[1]: libpod-a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe.scope: Deactivated successfully.
Jan 22 08:33:05 np0005592157 podman[77900]: 2026-01-22 13:33:05.489293903 +0000 UTC m=+0.942877207 container died a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe (image=quay.io/ceph/ceph:v18, name=eloquent_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8b0bb7ee3d56fd723b95f818851ceb9ee09dca302958770393b11e67f73351e4-merged.mount: Deactivated successfully.
Jan 22 08:33:06 np0005592157 podman[77900]: 2026-01-22 13:33:06.216353208 +0000 UTC m=+1.669936502 container remove a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe (image=quay.io/ceph/ceph:v18, name=eloquent_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:06 np0005592157 ceph-mgr[74655]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 22 08:33:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 22 08:33:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:06 np0005592157 systemd[1]: libpod-conmon-a58ced26b1bd9bb570087d31b23c7bbadf4c8b9bda84e880a84a78a7098253fe.scope: Deactivated successfully.
Jan 22 08:33:06 np0005592157 podman[77961]: 2026-01-22 13:33:06.396465322 +0000 UTC m=+0.047949486 container create f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:06 np0005592157 systemd[1]: Started libpod-conmon-f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9.scope.
Jan 22 08:33:06 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2413490500' entity='client.admin' 
Jan 22 08:33:06 np0005592157 ceph-mon[74359]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 22 08:33:06 np0005592157 podman[77961]: 2026-01-22 13:33:06.377046116 +0000 UTC m=+0.028530290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f42cc1d2f7420e6a57a902ba186cdc2183e95520e06b3e012596627ad4f5ec7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f42cc1d2f7420e6a57a902ba186cdc2183e95520e06b3e012596627ad4f5ec7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f42cc1d2f7420e6a57a902ba186cdc2183e95520e06b3e012596627ad4f5ec7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f42cc1d2f7420e6a57a902ba186cdc2183e95520e06b3e012596627ad4f5ec7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:06 np0005592157 podman[77961]: 2026-01-22 13:33:06.506906949 +0000 UTC m=+0.158391133 container init f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:06 np0005592157 podman[77961]: 2026-01-22 13:33:06.519985939 +0000 UTC m=+0.171470093 container start f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:33:06 np0005592157 podman[77961]: 2026-01-22 13:33:06.523863254 +0000 UTC m=+0.175347488 container attach f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:33:06 np0005592157 python3[78008]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:06 np0005592157 podman[78009]: 2026-01-22 13:33:06.848761026 +0000 UTC m=+0.047002683 container create 7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413 (image=quay.io/ceph/ceph:v18, name=friendly_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:06 np0005592157 systemd[1]: Started libpod-conmon-7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413.scope.
Jan 22 08:33:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f63a7ad97c3ab083efae72f647f1c1f39a8c148a812d97b98cf7b3517f6141/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f63a7ad97c3ab083efae72f647f1c1f39a8c148a812d97b98cf7b3517f6141/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:06 np0005592157 podman[78009]: 2026-01-22 13:33:06.911042982 +0000 UTC m=+0.109284659 container init 7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413 (image=quay.io/ceph/ceph:v18, name=friendly_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:06 np0005592157 podman[78009]: 2026-01-22 13:33:06.823673971 +0000 UTC m=+0.021915638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:06 np0005592157 podman[78009]: 2026-01-22 13:33:06.920140945 +0000 UTC m=+0.118382592 container start 7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413 (image=quay.io/ceph/ceph:v18, name=friendly_mccarthy, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:06 np0005592157 podman[78009]: 2026-01-22 13:33:06.927192348 +0000 UTC m=+0.125433995 container attach 7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413 (image=quay.io/ceph/ceph:v18, name=friendly_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/226141799' entity='client.admin' 
Jan 22 08:33:07 np0005592157 systemd[1]: libpod-7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413.scope: Deactivated successfully.
Jan 22 08:33:07 np0005592157 podman[78009]: 2026-01-22 13:33:07.471415764 +0000 UTC m=+0.669657441 container died 7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413 (image=quay.io/ceph/ceph:v18, name=friendly_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:33:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f0f63a7ad97c3ab083efae72f647f1c1f39a8c148a812d97b98cf7b3517f6141-merged.mount: Deactivated successfully.
Jan 22 08:33:07 np0005592157 podman[78009]: 2026-01-22 13:33:07.522867255 +0000 UTC m=+0.721108892 container remove 7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413 (image=quay.io/ceph/ceph:v18, name=friendly_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:07 np0005592157 systemd[1]: libpod-conmon-7698ece555c1db68d44c5a68eef0361e36c7551e5db9dbe1b345be117a4e5413.scope: Deactivated successfully.
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]: [
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:    {
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "available": false,
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "ceph_device": false,
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "lsm_data": {},
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "lvs": [],
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "path": "/dev/sr0",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "rejected_reasons": [
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "Insufficient space (<5GB)",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "Has a FileSystem"
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        ],
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        "sys_api": {
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "actuators": null,
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "device_nodes": "sr0",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "devname": "sr0",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "human_readable_size": "482.00 KB",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "id_bus": "ata",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "model": "QEMU DVD-ROM",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "nr_requests": "2",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "parent": "/dev/sr0",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "partitions": {},
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "path": "/dev/sr0",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "removable": "1",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "rev": "2.5+",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "ro": "0",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "rotational": "1",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "sas_address": "",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "sas_device_handle": "",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "scheduler_mode": "mq-deadline",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "sectors": 0,
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "sectorsize": "2048",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "size": 493568.0,
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "support_discard": "2048",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "type": "disk",
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:            "vendor": "QEMU"
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:        }
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]:    }
Jan 22 08:33:07 np0005592157 determined_pasteur[77978]: ]
Jan 22 08:33:07 np0005592157 systemd[1]: libpod-f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9.scope: Deactivated successfully.
Jan 22 08:33:07 np0005592157 podman[77961]: 2026-01-22 13:33:07.744891426 +0000 UTC m=+1.396375570 container died f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:07 np0005592157 systemd[1]: libpod-f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9.scope: Consumed 1.218s CPU time.
Jan 22 08:33:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9f42cc1d2f7420e6a57a902ba186cdc2183e95520e06b3e012596627ad4f5ec7-merged.mount: Deactivated successfully.
Jan 22 08:33:07 np0005592157 podman[77961]: 2026-01-22 13:33:07.80341459 +0000 UTC m=+1.454898734 container remove f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:33:07 np0005592157 systemd[1]: libpod-conmon-f019c7cdba494504617fb7a01a65e99f6f1f0106ea13694fc51155d915fa53b9.scope: Deactivated successfully.
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:33:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:07 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:33:07 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:33:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/226141799' entity='client.admin' 
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:08 np0005592157 ceph-mon[74359]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:33:08 np0005592157 ansible-async_wrapper.py[79334]: Invoked with j312521060932 30 /home/zuul/.ansible/tmp/ansible-tmp-1769088787.9647996-37287-235943097712096/AnsiballZ_command.py _
Jan 22 08:33:08 np0005592157 ansible-async_wrapper.py[79397]: Starting module and watcher
Jan 22 08:33:08 np0005592157 ansible-async_wrapper.py[79397]: Start watching 79401 (30)
Jan 22 08:33:08 np0005592157 ansible-async_wrapper.py[79401]: Start module (79401)
Jan 22 08:33:08 np0005592157 ansible-async_wrapper.py[79334]: Return async_wrapper task started.
Jan 22 08:33:08 np0005592157 python3[79405]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:08 np0005592157 podman[79455]: 2026-01-22 13:33:08.889839733 +0000 UTC m=+0.061048677 container create 0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10 (image=quay.io/ceph/ceph:v18, name=modest_einstein, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:33:08 np0005592157 systemd[1]: Started libpod-conmon-0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10.scope.
Jan 22 08:33:09 np0005592157 podman[79455]: 2026-01-22 13:33:08.864584134 +0000 UTC m=+0.035793168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35ccaaf99b1887f601844ca3ad369b6a0e04432b7f08d1f74cb9b09b022cad7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35ccaaf99b1887f601844ca3ad369b6a0e04432b7f08d1f74cb9b09b022cad7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:09 np0005592157 podman[79455]: 2026-01-22 13:33:09.053452713 +0000 UTC m=+0.224661707 container init 0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10 (image=quay.io/ceph/ceph:v18, name=modest_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 08:33:09 np0005592157 podman[79455]: 2026-01-22 13:33:09.061244324 +0000 UTC m=+0.232453268 container start 0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10 (image=quay.io/ceph/ceph:v18, name=modest_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:09 np0005592157 podman[79455]: 2026-01-22 13:33:09.066423331 +0000 UTC m=+0.237632295 container attach 0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10 (image=quay.io/ceph/ceph:v18, name=modest_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:33:09 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:33:09 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:33:09 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 08:33:09 np0005592157 modest_einstein[79505]: 
Jan 22 08:33:09 np0005592157 modest_einstein[79505]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 08:33:09 np0005592157 systemd[1]: libpod-0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10.scope: Deactivated successfully.
Jan 22 08:33:09 np0005592157 podman[79455]: 2026-01-22 13:33:09.747845028 +0000 UTC m=+0.919053972 container died 0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10 (image=quay.io/ceph/ceph:v18, name=modest_einstein, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:33:10 np0005592157 python3[79887]: ansible-ansible.legacy.async_status Invoked with jid=j312521060932.79334 mode=status _async_dir=/root/.ansible_async
Jan 22 08:33:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:10 np0005592157 ceph-mon[74359]: Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:33:10 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:33:10 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:33:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b35ccaaf99b1887f601844ca3ad369b6a0e04432b7f08d1f74cb9b09b022cad7-merged.mount: Deactivated successfully.
Jan 22 08:33:11 np0005592157 podman[79455]: 2026-01-22 13:33:11.308466962 +0000 UTC m=+2.479675926 container remove 0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10 (image=quay.io/ceph/ceph:v18, name=modest_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:11 np0005592157 systemd[1]: libpod-conmon-0cccee7c9615e789c8f1e582c64e3d7e4e3291f79498623c2ed15ccd5b38bf10.scope: Deactivated successfully.
Jan 22 08:33:11 np0005592157 ansible-async_wrapper.py[79401]: Module complete (79401)
Jan 22 08:33:11 np0005592157 python3[80380]: ansible-ansible.legacy.async_status Invoked with jid=j312521060932.79334 mode=status _async_dir=/root/.ansible_async
Jan 22 08:33:11 np0005592157 python3[80552]: ansible-ansible.legacy.async_status Invoked with jid=j312521060932.79334 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 08:33:11 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:33:11 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:33:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:12 np0005592157 python3[80729]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:33:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:12 np0005592157 ceph-mon[74359]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:33:12 np0005592157 python3[80944]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:12 np0005592157 podman[81006]: 2026-01-22 13:33:12.788833108 +0000 UTC m=+0.020362270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:33:13 np0005592157 podman[81006]: 2026-01-22 13:33:13.612360198 +0000 UTC m=+0.843889390 container create b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f (image=quay.io/ceph/ceph:v18, name=dazzling_greider, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 08:33:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:13 np0005592157 ansible-async_wrapper.py[79397]: Done in kid B.
Jan 22 08:33:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:13 np0005592157 ceph-mon[74359]: Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:33:13 np0005592157 systemd[1]: Started libpod-conmon-b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f.scope.
Jan 22 08:33:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8fd629e4c9e982a92ae48b3a3d8be677ea3cc77e998fd3edbd55700280b7d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8fd629e4c9e982a92ae48b3a3d8be677ea3cc77e998fd3edbd55700280b7d8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8fd629e4c9e982a92ae48b3a3d8be677ea3cc77e998fd3edbd55700280b7d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:14 np0005592157 podman[81006]: 2026-01-22 13:33:14.359547369 +0000 UTC m=+1.591076551 container init b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f (image=quay.io/ceph/ceph:v18, name=dazzling_greider, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:33:14 np0005592157 podman[81006]: 2026-01-22 13:33:14.369092292 +0000 UTC m=+1.600621444 container start b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f (image=quay.io/ceph/ceph:v18, name=dazzling_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:33:14 np0005592157 podman[81006]: 2026-01-22 13:33:14.861273964 +0000 UTC m=+2.092803126 container attach b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f (image=quay.io/ceph/ceph:v18, name=dazzling_greider, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:15 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 08:33:15 np0005592157 dazzling_greider[81121]: 
Jan 22 08:33:15 np0005592157 dazzling_greider[81121]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 08:33:15 np0005592157 systemd[1]: libpod-b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f.scope: Deactivated successfully.
Jan 22 08:33:15 np0005592157 podman[81006]: 2026-01-22 13:33:15.073297899 +0000 UTC m=+2.304827041 container died b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f (image=quay.io/ceph/ceph:v18, name=dazzling_greider, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 08:33:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:15 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 308e516d-0e42-4c4b-b1a9-7464d0eed85a (Updating crash deployment (+1 -> 1))
Jan 22 08:33:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 22 08:33:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:33:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0b8fd629e4c9e982a92ae48b3a3d8be677ea3cc77e998fd3edbd55700280b7d8-merged.mount: Deactivated successfully.
Jan 22 08:33:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:33:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:33:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 22 08:33:16 np0005592157 podman[81006]: 2026-01-22 13:33:16.331144173 +0000 UTC m=+3.562673325 container remove b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f (image=quay.io/ceph/ceph:v18, name=dazzling_greider, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:33:16 np0005592157 systemd[1]: libpod-conmon-b17331e89326c7571459bfc837a21ff3115b423b961e18092d94fba2b79e6c7f.scope: Deactivated successfully.
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:33:16 np0005592157 python3[81284]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:16 np0005592157 podman[81325]: 2026-01-22 13:33:16.933638476 +0000 UTC m=+0.078772371 container create e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0 (image=quay.io/ceph/ceph:v18, name=loving_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 08:33:16 np0005592157 podman[81325]: 2026-01-22 13:33:16.88116943 +0000 UTC m=+0.026303305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:16 np0005592157 podman[81323]: 2026-01-22 13:33:16.898549586 +0000 UTC m=+0.043969058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:33:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:33:17 np0005592157 ceph-mon[74359]: Deploying daemon crash.compute-0 on compute-0
Jan 22 08:33:17 np0005592157 systemd[1]: Started libpod-conmon-e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0.scope.
Jan 22 08:33:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d059b6f46105cb896c6407a5c76a74d90274f772d130d3746ca7df7bb026e503/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d059b6f46105cb896c6407a5c76a74d90274f772d130d3746ca7df7bb026e503/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d059b6f46105cb896c6407a5c76a74d90274f772d130d3746ca7df7bb026e503/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:17 np0005592157 podman[81325]: 2026-01-22 13:33:17.274706784 +0000 UTC m=+0.419840639 container init e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0 (image=quay.io/ceph/ceph:v18, name=loving_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:33:17 np0005592157 podman[81325]: 2026-01-22 13:33:17.286292668 +0000 UTC m=+0.431426533 container start e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0 (image=quay.io/ceph/ceph:v18, name=loving_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:33:17 np0005592157 podman[81323]: 2026-01-22 13:33:17.287719253 +0000 UTC m=+0.433138705 container create 0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:17 np0005592157 podman[81325]: 2026-01-22 13:33:17.291466575 +0000 UTC m=+0.436600480 container attach e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0 (image=quay.io/ceph/ceph:v18, name=loving_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:33:17 np0005592157 systemd[1]: Started libpod-conmon-0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d.scope.
Jan 22 08:33:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:17 np0005592157 podman[81323]: 2026-01-22 13:33:17.355882663 +0000 UTC m=+0.501302165 container init 0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:33:17 np0005592157 podman[81323]: 2026-01-22 13:33:17.362078005 +0000 UTC m=+0.507497477 container start 0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_greider, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:33:17 np0005592157 jolly_greider[81358]: 167 167
Jan 22 08:33:17 np0005592157 podman[81323]: 2026-01-22 13:33:17.366098054 +0000 UTC m=+0.511517546 container attach 0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_greider, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:17 np0005592157 systemd[1]: libpod-0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d.scope: Deactivated successfully.
Jan 22 08:33:17 np0005592157 podman[81323]: 2026-01-22 13:33:17.367839706 +0000 UTC m=+0.513259158 container died 0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_greider, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:33:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8d61f5de74e806c548f69084cff5e96796b9fc915f00bf33368d0a24d6a026c1-merged.mount: Deactivated successfully.
Jan 22 08:33:17 np0005592157 podman[81323]: 2026-01-22 13:33:17.746726601 +0000 UTC m=+0.892146063 container remove 0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_greider, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 08:33:17 np0005592157 systemd[1]: libpod-conmon-0e6a156d62d7bf76b8dbedf7d4b87a29dbce5b5787b1d6655e33501e13da955d.scope: Deactivated successfully.
Jan 22 08:33:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 22 08:33:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3059464065' entity='client.admin' 
Jan 22 08:33:17 np0005592157 systemd[1]: Reloading.
Jan 22 08:33:17 np0005592157 podman[81325]: 2026-01-22 13:33:17.917882725 +0000 UTC m=+1.063016590 container died e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0 (image=quay.io/ceph/ceph:v18, name=loving_cerf, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 08:33:17 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:33:17 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:33:18 np0005592157 systemd[1]: libpod-e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0.scope: Deactivated successfully.
Jan 22 08:33:18 np0005592157 systemd[1]: Reloading.
Jan 22 08:33:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:18 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:33:18 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:33:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d059b6f46105cb896c6407a5c76a74d90274f772d130d3746ca7df7bb026e503-merged.mount: Deactivated successfully.
Jan 22 08:33:18 np0005592157 systemd[1]: Starting Ceph crash.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:33:18 np0005592157 podman[81325]: 2026-01-22 13:33:18.5464896 +0000 UTC m=+1.691623455 container remove e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0 (image=quay.io/ceph/ceph:v18, name=loving_cerf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:33:18 np0005592157 systemd[1]: libpod-conmon-e9f70f831de54632d8b40801e1324900dca5b61c688d3b98b29a08435a6413e0.scope: Deactivated successfully.
Jan 22 08:33:18 np0005592157 python3[81562]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:18 np0005592157 podman[81561]: 2026-01-22 13:33:18.788311836 +0000 UTC m=+0.035532132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:18 np0005592157 podman[81561]: 2026-01-22 13:33:18.924744029 +0000 UTC m=+0.171964235 container create 451f24e807fd5d893699749834c02ee3afd2612cd3246b271d7bbf41d75d228d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:18 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3059464065' entity='client.admin' 
Jan 22 08:33:18 np0005592157 podman[81575]: 2026-01-22 13:33:18.957333548 +0000 UTC m=+0.056645509 container create 6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980 (image=quay.io/ceph/ceph:v18, name=priceless_gould, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:18 np0005592157 systemd[1]: Started libpod-conmon-6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980.scope.
Jan 22 08:33:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89636a8c82438af2b3e133f521008515b2f7d54cf2e749f5a7b7e5048f5ea780/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89636a8c82438af2b3e133f521008515b2f7d54cf2e749f5a7b7e5048f5ea780/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89636a8c82438af2b3e133f521008515b2f7d54cf2e749f5a7b7e5048f5ea780/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89636a8c82438af2b3e133f521008515b2f7d54cf2e749f5a7b7e5048f5ea780/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:18 np0005592157 podman[81561]: 2026-01-22 13:33:18.999348507 +0000 UTC m=+0.246568753 container init 451f24e807fd5d893699749834c02ee3afd2612cd3246b271d7bbf41d75d228d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:33:19 np0005592157 podman[81561]: 2026-01-22 13:33:19.005669312 +0000 UTC m=+0.252889518 container start 451f24e807fd5d893699749834c02ee3afd2612cd3246b271d7bbf41d75d228d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 08:33:19 np0005592157 bash[81561]: 451f24e807fd5d893699749834c02ee3afd2612cd3246b271d7bbf41d75d228d
Jan 22 08:33:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:19 np0005592157 systemd[1]: Started Ceph crash.compute-0 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:33:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675f0ffdcadd6ec18f43769db5ea89909427ef39d7d10ca5cf5d1ad1e9f4f90e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675f0ffdcadd6ec18f43769db5ea89909427ef39d7d10ca5cf5d1ad1e9f4f90e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675f0ffdcadd6ec18f43769db5ea89909427ef39d7d10ca5cf5d1ad1e9f4f90e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:19 np0005592157 podman[81575]: 2026-01-22 13:33:18.935174625 +0000 UTC m=+0.034486646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:19 np0005592157 podman[81575]: 2026-01-22 13:33:19.037745638 +0000 UTC m=+0.137057629 container init 6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980 (image=quay.io/ceph/ceph:v18, name=priceless_gould, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:19 np0005592157 podman[81575]: 2026-01-22 13:33:19.044722269 +0000 UTC m=+0.144034230 container start 6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980 (image=quay.io/ceph/ceph:v18, name=priceless_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:33:19 np0005592157 podman[81575]: 2026-01-22 13:33:19.048684726 +0000 UTC m=+0.147996727 container attach 6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980 (image=quay.io/ceph/ceph:v18, name=priceless_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:19 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 308e516d-0e42-4c4b-b1a9-7464d0eed85a (Updating crash deployment (+1 -> 1))
Jan 22 08:33:19 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 308e516d-0e42-4c4b-b1a9-7464d0eed85a (Updating crash deployment (+1 -> 1)) in 4 seconds
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev aedb4820-85d5-453f-b611-0f5e9f6b1044 does not exist
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c2e70aa6-80b3-46e3-bf82-b761df35e887 does not exist
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: 2026-01-22T13:33:19.415+0000 7f8c65a54640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: 2026-01-22T13:33:19.415+0000 7f8c65a54640 -1 AuthRegistry(0x7f8c60066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: 2026-01-22T13:33:19.416+0000 7f8c65a54640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: 2026-01-22T13:33:19.416+0000 7f8c65a54640 -1 AuthRegistry(0x7f8c65a53000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: 2026-01-22T13:33:19.417+0000 7f8c5effd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: 2026-01-22T13:33:19.417+0000 7f8c65a54640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 22 08:33:19 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-0[81591]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 22 08:33:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1574591104' entity='client.admin' 
Jan 22 08:33:20 np0005592157 systemd[1]: libpod-6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980.scope: Deactivated successfully.
Jan 22 08:33:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:20 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1574591104' entity='client.admin' 
Jan 22 08:33:20 np0005592157 podman[81856]: 2026-01-22 13:33:20.820546966 +0000 UTC m=+0.869660382 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:21 np0005592157 podman[81891]: 2026-01-22 13:33:21.036154159 +0000 UTC m=+0.109867763 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:33:21 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 1 completed events
Jan 22 08:33:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:33:21 np0005592157 podman[81856]: 2026-01-22 13:33:21.680335416 +0000 UTC m=+1.729448842 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:33:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:22 np0005592157 podman[81575]: 2026-01-22 13:33:22.256369531 +0000 UTC m=+3.355681512 container died 6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980 (image=quay.io/ceph/ceph:v18, name=priceless_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:33:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-675f0ffdcadd6ec18f43769db5ea89909427ef39d7d10ca5cf5d1ad1e9f4f90e-merged.mount: Deactivated successfully.
Jan 22 08:33:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:23 np0005592157 podman[81871]: 2026-01-22 13:33:23.782831607 +0000 UTC m=+3.749082942 container remove 6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980 (image=quay.io/ceph/ceph:v18, name=priceless_gould, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:33:23 np0005592157 systemd[1]: libpod-conmon-6144c6baa8adeaddd3e317052f11b4581cd3634d7b5170e4496850408dd3c980.scope: Deactivated successfully.
Jan 22 08:33:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a54d536c-5ab1-43f0-aa57-61e658f44aa0 does not exist
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f0ff2be1-3d2d-470a-ad10-e972041ea38e does not exist
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev af7672e8-0e3d-4d62-97ef-7141a4ea0710 does not exist
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 22 08:33:24 np0005592157 python3[81966]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:33:24 np0005592157 podman[82017]: 2026-01-22 13:33:24.237601761 +0000 UTC m=+0.043860236 container create 08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675 (image=quay.io/ceph/ceph:v18, name=goofy_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:24 np0005592157 systemd[1]: Started libpod-conmon-08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675.scope.
Jan 22 08:33:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187595fbc3baa6af3810d21b54a03239afbc3d2b7df751c30aeea44f2375c294/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187595fbc3baa6af3810d21b54a03239afbc3d2b7df751c30aeea44f2375c294/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187595fbc3baa6af3810d21b54a03239afbc3d2b7df751c30aeea44f2375c294/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:24 np0005592157 podman[82017]: 2026-01-22 13:33:24.215169501 +0000 UTC m=+0.021428026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:24 np0005592157 podman[82017]: 2026-01-22 13:33:24.319378295 +0000 UTC m=+0.125636780 container init 08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675 (image=quay.io/ceph/ceph:v18, name=goofy_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 08:33:24 np0005592157 podman[82017]: 2026-01-22 13:33:24.327155875 +0000 UTC m=+0.133414350 container start 08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675 (image=quay.io/ceph/ceph:v18, name=goofy_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 08:33:24 np0005592157 podman[82017]: 2026-01-22 13:33:24.33061869 +0000 UTC m=+0.136877165 container attach 08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675 (image=quay.io/ceph/ceph:v18, name=goofy_cray, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:33:24 np0005592157 podman[82152]: 2026-01-22 13:33:24.680691769 +0000 UTC m=+0.048169362 container create dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 08:33:24 np0005592157 systemd[1]: Started libpod-conmon-dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2.scope.
Jan 22 08:33:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:24 np0005592157 podman[82152]: 2026-01-22 13:33:24.652617201 +0000 UTC m=+0.020094794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:24 np0005592157 podman[82152]: 2026-01-22 13:33:24.761635832 +0000 UTC m=+0.129113435 container init dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:33:24 np0005592157 podman[82152]: 2026-01-22 13:33:24.768106041 +0000 UTC m=+0.135583614 container start dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 08:33:24 np0005592157 podman[82152]: 2026-01-22 13:33:24.771584526 +0000 UTC m=+0.139062129 container attach dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:24 np0005592157 systemd[1]: libpod-dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2.scope: Deactivated successfully.
Jan 22 08:33:24 np0005592157 youthful_einstein[82185]: 167 167
Jan 22 08:33:24 np0005592157 conmon[82185]: conmon dfa19fdbc03ef3f0c0d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2.scope/container/memory.events
Jan 22 08:33:24 np0005592157 podman[82152]: 2026-01-22 13:33:24.777828179 +0000 UTC m=+0.145305752 container died dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-469ba9d5f10f3ed4425d077660b38e28734940f74e5bbd8a785f1b8e3e792e61-merged.mount: Deactivated successfully.
Jan 22 08:33:24 np0005592157 podman[82152]: 2026-01-22 13:33:24.82599888 +0000 UTC m=+0.193476443 container remove dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:24 np0005592157 systemd[1]: libpod-conmon-dfa19fdbc03ef3f0c0d5b3c07510e5ec31e2f2b1e3cd1ef8c1ad249577ed5ae2.scope: Deactivated successfully.
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1043235686' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.nyayzk (unknown last config time)...
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.nyayzk (unknown last config time)...
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:33:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:33:24 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1043235686' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1043235686' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 22 08:33:25 np0005592157 goofy_cray[82065]: set require_min_compat_client to mimic
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 22 08:33:25 np0005592157 systemd[1]: libpod-08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675.scope: Deactivated successfully.
Jan 22 08:33:25 np0005592157 podman[82017]: 2026-01-22 13:33:25.213356362 +0000 UTC m=+1.019614847 container died 08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675 (image=quay.io/ceph/ceph:v18, name=goofy_cray, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-187595fbc3baa6af3810d21b54a03239afbc3d2b7df751c30aeea44f2375c294-merged.mount: Deactivated successfully.
Jan 22 08:33:25 np0005592157 podman[82017]: 2026-01-22 13:33:25.265127811 +0000 UTC m=+1.071386296 container remove 08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675 (image=quay.io/ceph/ceph:v18, name=goofy_cray, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:33:25 np0005592157 systemd[1]: libpod-conmon-08133b4aade1823d7348d85ffdb266960898935f0bac7f12a8f23a8144190675.scope: Deactivated successfully.
Jan 22 08:33:25 np0005592157 podman[82337]: 2026-01-22 13:33:25.523608805 +0000 UTC m=+0.045696111 container create 4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:25 np0005592157 systemd[1]: Started libpod-conmon-4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5.scope.
Jan 22 08:33:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:25 np0005592157 podman[82337]: 2026-01-22 13:33:25.504894356 +0000 UTC m=+0.026981692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:25 np0005592157 podman[82337]: 2026-01-22 13:33:25.603061182 +0000 UTC m=+0.125148508 container init 4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:25 np0005592157 podman[82337]: 2026-01-22 13:33:25.61562914 +0000 UTC m=+0.137716446 container start 4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 22 08:33:25 np0005592157 podman[82337]: 2026-01-22 13:33:25.619629748 +0000 UTC m=+0.141717104 container attach 4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:25 np0005592157 vibrant_morse[82354]: 167 167
Jan 22 08:33:25 np0005592157 systemd[1]: libpod-4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5.scope: Deactivated successfully.
Jan 22 08:33:25 np0005592157 podman[82337]: 2026-01-22 13:33:25.621852522 +0000 UTC m=+0.143939868 container died 4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:33:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7dc97e9843c3d476759a8af402053a154857ceb74a20a48ec9091a2ffc0e2efc-merged.mount: Deactivated successfully.
Jan 22 08:33:25 np0005592157 podman[82337]: 2026-01-22 13:33:25.670092444 +0000 UTC m=+0.192179770 container remove 4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:25 np0005592157 systemd[1]: libpod-conmon-4996e4dc7a86a0b8cf43e18ff1bbc58974af777309a42a0d96a64de7966bf6f5.scope: Deactivated successfully.
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:33:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7f0096c8-e51b-433c-84b9-1f530e8a7bce does not exist
Jan 22 08:33:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4b965220-d247-43c7-9295-03e02fdca637 does not exist
Jan 22 08:33:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev af71e205-d3e7-43ee-84d2-da00afade8aa does not exist
Jan 22 08:33:26 np0005592157 python3[82450]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:26 np0005592157 podman[82451]: 2026-01-22 13:33:26.11757511 +0000 UTC m=+0.045700241 container create c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41 (image=quay.io/ceph/ceph:v18, name=blissful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 22 08:33:26 np0005592157 systemd[1]: Started libpod-conmon-c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41.scope.
Jan 22 08:33:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:26 np0005592157 podman[82451]: 2026-01-22 13:33:26.09838579 +0000 UTC m=+0.026510931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:26 np0005592157 ceph-mon[74359]: Reconfiguring mgr.compute-0.nyayzk (unknown last config time)...
Jan 22 08:33:26 np0005592157 ceph-mon[74359]: Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:33:26 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1043235686' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 22 08:33:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00eac5ae099f26fb954ddba9c276348f80491b2d89d45f3af2ab4fe8aa48a646/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00eac5ae099f26fb954ddba9c276348f80491b2d89d45f3af2ab4fe8aa48a646/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00eac5ae099f26fb954ddba9c276348f80491b2d89d45f3af2ab4fe8aa48a646/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:26 np0005592157 podman[82451]: 2026-01-22 13:33:26.224049179 +0000 UTC m=+0.152174320 container init c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41 (image=quay.io/ceph/ceph:v18, name=blissful_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:26 np0005592157 podman[82451]: 2026-01-22 13:33:26.22937559 +0000 UTC m=+0.157500721 container start c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41 (image=quay.io/ceph/ceph:v18, name=blissful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 22 08:33:26 np0005592157 podman[82451]: 2026-01-22 13:33:26.232266101 +0000 UTC m=+0.160391302 container attach c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41 (image=quay.io/ceph/ceph:v18, name=blissful_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:26 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:27 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Added host compute-0
Jan 22 08:33:27 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:33:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f93f5ab5-5260-4c70-aca9-1cf528a61b4b does not exist
Jan 22 08:33:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d716b38e-dcc1-4cb2-9650-abfc69d00630 does not exist
Jan 22 08:33:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 21a6dba3-44fd-45bc-a73a-7e302159eca8 does not exist
Jan 22 08:33:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:28 np0005592157 ceph-mon[74359]: Added host compute-0
Jan 22 08:33:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:33:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:28 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 22 08:33:28 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 22 08:33:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:30 np0005592157 ceph-mon[74359]: Deploying cephadm binary to compute-1
Jan 22 08:33:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:33:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:32 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Added host compute-1
Jan 22 08:33:32 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 22 08:33:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:33:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:33:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:34 np0005592157 ceph-mon[74359]: Added host compute-1
Jan 22 08:33:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:34 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 22 08:33:34 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 22 08:33:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:35 np0005592157 ceph-mon[74359]: Deploying cephadm binary to compute-2
Jan 22 08:33:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:33:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Added host compute-2
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 22 08:33:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:38 np0005592157 blissful_curran[82466]: Added host 'compute-0' with addr '192.168.122.100'
Jan 22 08:33:38 np0005592157 blissful_curran[82466]: Added host 'compute-1' with addr '192.168.122.101'
Jan 22 08:33:38 np0005592157 blissful_curran[82466]: Added host 'compute-2' with addr '192.168.122.102'
Jan 22 08:33:38 np0005592157 blissful_curran[82466]: Scheduled mon update...
Jan 22 08:33:38 np0005592157 blissful_curran[82466]: Scheduled mgr update...
Jan 22 08:33:38 np0005592157 blissful_curran[82466]: Scheduled osd.default_drive_group update...
Jan 22 08:33:38 np0005592157 systemd[1]: libpod-c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41.scope: Deactivated successfully.
Jan 22 08:33:38 np0005592157 podman[82451]: 2026-01-22 13:33:38.087325328 +0000 UTC m=+12.015450449 container died c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41 (image=quay.io/ceph/ceph:v18, name=blissful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-00eac5ae099f26fb954ddba9c276348f80491b2d89d45f3af2ab4fe8aa48a646-merged.mount: Deactivated successfully.
Jan 22 08:33:38 np0005592157 podman[82451]: 2026-01-22 13:33:38.148373584 +0000 UTC m=+12.076498745 container remove c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41 (image=quay.io/ceph/ceph:v18, name=blissful_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 08:33:38 np0005592157 systemd[1]: libpod-conmon-c8b95d040d976b593eb2854401ef1b295408cdf81d418509755d517d593d7f41.scope: Deactivated successfully.
Jan 22 08:33:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:38 np0005592157 python3[82699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:33:38 np0005592157 podman[82701]: 2026-01-22 13:33:38.632344514 +0000 UTC m=+0.043962508 container create 5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5 (image=quay.io/ceph/ceph:v18, name=hopeful_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:33:38 np0005592157 systemd[1]: Started libpod-conmon-5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5.scope.
Jan 22 08:33:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:33:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1beb0a412c763a0705cc983d35fddbbf6cbbd72eb97af76938902eba0ac7a317/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1beb0a412c763a0705cc983d35fddbbf6cbbd72eb97af76938902eba0ac7a317/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1beb0a412c763a0705cc983d35fddbbf6cbbd72eb97af76938902eba0ac7a317/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:38 np0005592157 podman[82701]: 2026-01-22 13:33:38.704448561 +0000 UTC m=+0.116066585 container init 5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5 (image=quay.io/ceph/ceph:v18, name=hopeful_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:33:38 np0005592157 podman[82701]: 2026-01-22 13:33:38.614245881 +0000 UTC m=+0.025863885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:33:38 np0005592157 podman[82701]: 2026-01-22 13:33:38.711063143 +0000 UTC m=+0.122681137 container start 5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5 (image=quay.io/ceph/ceph:v18, name=hopeful_sanderson, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:33:38 np0005592157 podman[82701]: 2026-01-22 13:33:38.713943754 +0000 UTC m=+0.125561758 container attach 5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5 (image=quay.io/ceph/ceph:v18, name=hopeful_sanderson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: Added host compute-2
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 22 08:33:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559045817' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 08:33:39 np0005592157 hopeful_sanderson[82717]: 
Jan 22 08:33:39 np0005592157 hopeful_sanderson[82717]: {"fsid":"088fe176-0106-5401-803c-2da38b73b76a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":102,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-22T13:31:54.130920+0000","services":{}},"progress_events":{}}
Jan 22 08:33:39 np0005592157 systemd[1]: libpod-5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5.scope: Deactivated successfully.
Jan 22 08:33:39 np0005592157 podman[82701]: 2026-01-22 13:33:39.422655801 +0000 UTC m=+0.834273835 container died 5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5 (image=quay.io/ceph/ceph:v18, name=hopeful_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1beb0a412c763a0705cc983d35fddbbf6cbbd72eb97af76938902eba0ac7a317-merged.mount: Deactivated successfully.
Jan 22 08:33:39 np0005592157 podman[82701]: 2026-01-22 13:33:39.466832413 +0000 UTC m=+0.878450407 container remove 5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5 (image=quay.io/ceph/ceph:v18, name=hopeful_sanderson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:33:39 np0005592157 systemd[1]: libpod-conmon-5bef11f77eeac5017a9b71d1fa8a93a719dc1bfeb1ee01b1480132338b7ff5e5.scope: Deactivated successfully.
Jan 22 08:33:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:33:46
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] No pools available
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:33:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:33:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:33:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:34:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:34:07 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:34:07 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:34:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:34:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:34:08 np0005592157 ceph-mon[74359]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:34:08 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:34:08 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:34:09 np0005592157 ceph-mon[74359]: Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:34:09 np0005592157 python3[82778]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:09 np0005592157 podman[82780]: 2026-01-22 13:34:09.770101252 +0000 UTC m=+0.037605904 container create 923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a (image=quay.io/ceph/ceph:v18, name=charming_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 08:34:09 np0005592157 systemd[1]: Started libpod-conmon-923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a.scope.
Jan 22 08:34:09 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:34:09 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:34:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a237a5c1a69b3678807fe008e4eb13f0a92f4ebb093ae67c9d5a934b1b26dc7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a237a5c1a69b3678807fe008e4eb13f0a92f4ebb093ae67c9d5a934b1b26dc7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a237a5c1a69b3678807fe008e4eb13f0a92f4ebb093ae67c9d5a934b1b26dc7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:09 np0005592157 podman[82780]: 2026-01-22 13:34:09.850462995 +0000 UTC m=+0.117967697 container init 923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a (image=quay.io/ceph/ceph:v18, name=charming_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:09 np0005592157 podman[82780]: 2026-01-22 13:34:09.754883819 +0000 UTC m=+0.022388491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:09 np0005592157 podman[82780]: 2026-01-22 13:34:09.857251212 +0000 UTC m=+0.124755854 container start 923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a (image=quay.io/ceph/ceph:v18, name=charming_chaum, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:09 np0005592157 podman[82780]: 2026-01-22 13:34:09.860556243 +0000 UTC m=+0.128060955 container attach 923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a (image=quay.io/ceph/ceph:v18, name=charming_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 22 08:34:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656418042' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 08:34:10 np0005592157 charming_chaum[82797]: 
Jan 22 08:34:10 np0005592157 charming_chaum[82797]: {"fsid":"088fe176-0106-5401-803c-2da38b73b76a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":133,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-22T13:33:48.260436+0000","services":{}},"progress_events":{}}
Jan 22 08:34:10 np0005592157 systemd[1]: libpod-923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a.scope: Deactivated successfully.
Jan 22 08:34:10 np0005592157 podman[82780]: 2026-01-22 13:34:10.525310289 +0000 UTC m=+0.792814941 container died 923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a (image=quay.io/ceph/ceph:v18, name=charming_chaum, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:34:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7a237a5c1a69b3678807fe008e4eb13f0a92f4ebb093ae67c9d5a934b1b26dc7-merged.mount: Deactivated successfully.
Jan 22 08:34:10 np0005592157 ceph-mon[74359]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:34:10 np0005592157 podman[82780]: 2026-01-22 13:34:10.580084023 +0000 UTC m=+0.847588665 container remove 923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a (image=quay.io/ceph/ceph:v18, name=charming_chaum, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 08:34:10 np0005592157 systemd[1]: libpod-conmon-923687392c2f6204ccf88ce17013c73f4c64dae54ae1f4983d64ce3f018ecc4a.scope: Deactivated successfully.
Jan 22 08:34:11 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:34:11 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:34:11 np0005592157 ceph-mon[74359]: Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 4372901b-e125-4b74-852c-a7a89cc9360a (Updating crash deployment (+1 -> 2))
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:34:12.161+0000 7fdf394d9640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: service_name: mon
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: placement:
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  hosts:
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  - compute-0
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  - compute-1
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  - compute-2
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:34:12.161+0000 7fdf394d9640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: service_name: mgr
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: placement:
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  hosts:
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  - compute-0
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  - compute-1
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]:  - compute-2
Jan 22 08:34:12 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 22 08:34:12 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 22 08:34:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: Deploying daemon crash.compute-1 on compute-1
Jan 22 08:34:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 22 08:34:14 np0005592157 ceph-mon[74359]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 22 08:34:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 4372901b-e125-4b74-852c-a7a89cc9360a (Updating crash deployment (+1 -> 2))
Jan 22 08:34:16 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 4372901b-e125-4b74-852c-a7a89cc9360a (Updating crash deployment (+1 -> 2)) in 5 seconds
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:34:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:34:17 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 2 completed events
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:34:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:17 np0005592157 podman[82972]: 2026-01-22 13:34:17.44884739 +0000 UTC m=+0.049062775 container create cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:17 np0005592157 systemd[1]: Started libpod-conmon-cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640.scope.
Jan 22 08:34:17 np0005592157 podman[82972]: 2026-01-22 13:34:17.427706832 +0000 UTC m=+0.027922247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:17 np0005592157 podman[82972]: 2026-01-22 13:34:17.541146816 +0000 UTC m=+0.141362241 container init cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:34:17 np0005592157 podman[82972]: 2026-01-22 13:34:17.548339633 +0000 UTC m=+0.148555008 container start cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:17 np0005592157 podman[82972]: 2026-01-22 13:34:17.55231872 +0000 UTC m=+0.152534105 container attach cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_satoshi, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:34:17 np0005592157 bold_satoshi[82988]: 167 167
Jan 22 08:34:17 np0005592157 systemd[1]: libpod-cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640.scope: Deactivated successfully.
Jan 22 08:34:17 np0005592157 podman[82972]: 2026-01-22 13:34:17.557386245 +0000 UTC m=+0.157601620 container died cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-57411cc5ffe05c15fa3745f97a289a4a7fa4d139f8fc42d479b3b570242e1053-merged.mount: Deactivated successfully.
Jan 22 08:34:17 np0005592157 podman[82972]: 2026-01-22 13:34:17.592460456 +0000 UTC m=+0.192675831 container remove cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:17 np0005592157 systemd[1]: libpod-conmon-cd7cfb164e07698b312cf7246ec3b719f7c753403f84fdfe689d3327cb8f0640.scope: Deactivated successfully.
Jan 22 08:34:17 np0005592157 podman[83014]: 2026-01-22 13:34:17.763952485 +0000 UTC m=+0.047952358 container create 84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:17 np0005592157 systemd[1]: Started libpod-conmon-84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0.scope.
Jan 22 08:34:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7565b8bdaf842495a096e528e49976a15ae79f1370db8766e5811b0f6b21feaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7565b8bdaf842495a096e528e49976a15ae79f1370db8766e5811b0f6b21feaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7565b8bdaf842495a096e528e49976a15ae79f1370db8766e5811b0f6b21feaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7565b8bdaf842495a096e528e49976a15ae79f1370db8766e5811b0f6b21feaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7565b8bdaf842495a096e528e49976a15ae79f1370db8766e5811b0f6b21feaf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592157 podman[83014]: 2026-01-22 13:34:17.744148819 +0000 UTC m=+0.028148702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:17 np0005592157 podman[83014]: 2026-01-22 13:34:17.846733737 +0000 UTC m=+0.130733600 container init 84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_panini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:34:17 np0005592157 podman[83014]: 2026-01-22 13:34:17.854555949 +0000 UTC m=+0.138555812 container start 84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:34:17 np0005592157 podman[83014]: 2026-01-22 13:34:17.858511246 +0000 UTC m=+0.142511109 container attach 84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:18 np0005592157 vigilant_panini[83030]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:34:18 np0005592157 vigilant_panini[83030]: --> relative data size: 1.0
Jan 22 08:34:18 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 08:34:18 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new dbf8012c-a884-4617-89df-833bc5f19dbf
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf"} v 0) v1
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1173383197' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf"}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "729e7fcc-4be0-4e65-a251-dac5739e2fea"} v 0) v1
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/417846911' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "729e7fcc-4be0-4e65-a251-dac5739e2fea"}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1173383197' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf"}]': finished
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/417846911' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "729e7fcc-4be0-4e65-a251-dac5739e2fea"}]': finished
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:19 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1173383197' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf"}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.101:0/417846911' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "729e7fcc-4be0-4e65-a251-dac5739e2fea"}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1173383197' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf"}]': finished
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.101:0/417846911' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "729e7fcc-4be0-4e65-a251-dac5739e2fea"}]': finished
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 22 08:34:19 np0005592157 lvm[83078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:34:19 np0005592157 lvm[83078]: VG ceph_vg0 finished
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1614174489' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 22 08:34:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301387970' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: stderr: got monmap epoch 1
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: --> Creating keyring file for osd.0
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 22 08:34:19 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid dbf8012c-a884-4617-89df-833bc5f19dbf --setuser ceph --setgroup ceph
Jan 22 08:34:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 22 08:34:21 np0005592157 ceph-mon[74359]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 22 08:34:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: stderr: 2026-01-22T13:34:20.001+0000 7f6406648740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: stderr: 2026-01-22T13:34:20.001+0000 7f6406648740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: stderr: 2026-01-22T13:34:20.001+0000 7f6406648740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: stderr: 2026-01-22T13:34:20.001+0000 7f6406648740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 22 08:34:22 np0005592157 vigilant_panini[83030]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 22 08:34:22 np0005592157 systemd[1]: libpod-84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0.scope: Deactivated successfully.
Jan 22 08:34:22 np0005592157 systemd[1]: libpod-84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0.scope: Consumed 2.895s CPU time.
Jan 22 08:34:22 np0005592157 podman[83014]: 2026-01-22 13:34:22.610922037 +0000 UTC m=+4.894921930 container died 84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_panini, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7565b8bdaf842495a096e528e49976a15ae79f1370db8766e5811b0f6b21feaf-merged.mount: Deactivated successfully.
Jan 22 08:34:22 np0005592157 podman[83014]: 2026-01-22 13:34:22.684900903 +0000 UTC m=+4.968900796 container remove 84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 08:34:22 np0005592157 systemd[1]: libpod-conmon-84efd7f5938cb4393b885803bb6667b81eeb2161e557ba6468b9bff664b2e6e0.scope: Deactivated successfully.
Jan 22 08:34:23 np0005592157 podman[84151]: 2026-01-22 13:34:23.533618005 +0000 UTC m=+0.061753896 container create 0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:34:23 np0005592157 systemd[1]: Started libpod-conmon-0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee.scope.
Jan 22 08:34:23 np0005592157 podman[84151]: 2026-01-22 13:34:23.502135043 +0000 UTC m=+0.030270994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:23 np0005592157 podman[84151]: 2026-01-22 13:34:23.649283845 +0000 UTC m=+0.177419716 container init 0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:23 np0005592157 podman[84151]: 2026-01-22 13:34:23.66048383 +0000 UTC m=+0.188619701 container start 0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:23 np0005592157 podman[84151]: 2026-01-22 13:34:23.665212996 +0000 UTC m=+0.193348867 container attach 0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:23 np0005592157 brave_ride[84167]: 167 167
Jan 22 08:34:23 np0005592157 systemd[1]: libpod-0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee.scope: Deactivated successfully.
Jan 22 08:34:23 np0005592157 conmon[84167]: conmon 0b6d70636fb8cadfa5a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee.scope/container/memory.events
Jan 22 08:34:23 np0005592157 podman[84151]: 2026-01-22 13:34:23.671317405 +0000 UTC m=+0.199453296 container died 0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 08:34:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-48fdbf90bad492193e94f6d5c5f63e9a4713e6539cc7cd8dfdb5ed642a2664b2-merged.mount: Deactivated successfully.
Jan 22 08:34:23 np0005592157 podman[84151]: 2026-01-22 13:34:23.726413228 +0000 UTC m=+0.254549079 container remove 0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:23 np0005592157 systemd[1]: libpod-conmon-0b6d70636fb8cadfa5a44542311a8f7348c0cd1bf20a848d4639cd0296ee99ee.scope: Deactivated successfully.
Jan 22 08:34:23 np0005592157 podman[84191]: 2026-01-22 13:34:23.894680078 +0000 UTC m=+0.049406264 container create 322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:23 np0005592157 systemd[1]: Started libpod-conmon-322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a.scope.
Jan 22 08:34:23 np0005592157 podman[84191]: 2026-01-22 13:34:23.876123203 +0000 UTC m=+0.030849409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242d1c8676c61443c9a82c7c0ad248a13a212061197241c77746e598961646c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242d1c8676c61443c9a82c7c0ad248a13a212061197241c77746e598961646c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242d1c8676c61443c9a82c7c0ad248a13a212061197241c77746e598961646c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242d1c8676c61443c9a82c7c0ad248a13a212061197241c77746e598961646c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:24 np0005592157 podman[84191]: 2026-01-22 13:34:24.007033266 +0000 UTC m=+0.161759482 container init 322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:24 np0005592157 podman[84191]: 2026-01-22 13:34:24.024194547 +0000 UTC m=+0.178920743 container start 322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:34:24 np0005592157 podman[84191]: 2026-01-22 13:34:24.02797121 +0000 UTC m=+0.182697436 container attach 322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:34:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]: {
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:    "0": [
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:        {
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "devices": [
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "/dev/loop3"
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            ],
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "lv_name": "ceph_lv0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "lv_size": "7511998464",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "name": "ceph_lv0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "tags": {
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.cluster_name": "ceph",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.crush_device_class": "",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.encrypted": "0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.osd_id": "0",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.type": "block",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:                "ceph.vdo": "0"
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            },
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "type": "block",
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:            "vg_name": "ceph_vg0"
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:        }
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]:    ]
Jan 22 08:34:24 np0005592157 vibrant_khayyam[84207]: }
Jan 22 08:34:24 np0005592157 systemd[1]: libpod-322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a.scope: Deactivated successfully.
Jan 22 08:34:24 np0005592157 podman[84191]: 2026-01-22 13:34:24.844740497 +0000 UTC m=+0.999466693 container died 322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:34:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-242d1c8676c61443c9a82c7c0ad248a13a212061197241c77746e598961646c4-merged.mount: Deactivated successfully.
Jan 22 08:34:24 np0005592157 podman[84191]: 2026-01-22 13:34:24.903881649 +0000 UTC m=+1.058607835 container remove 322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 08:34:24 np0005592157 systemd[1]: libpod-conmon-322cdf32fb0d75c78d3d789e633edde15935e1346bea3e791b17e594aaea5d4a.scope: Deactivated successfully.
Jan 22 08:34:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 22 08:34:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 08:34:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:34:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:34:24 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 22 08:34:24 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 22 08:34:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 08:34:25 np0005592157 podman[84371]: 2026-01-22 13:34:25.699512438 +0000 UTC m=+0.046606205 container create 4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:34:25 np0005592157 systemd[1]: Started libpod-conmon-4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580.scope.
Jan 22 08:34:25 np0005592157 podman[84371]: 2026-01-22 13:34:25.67878742 +0000 UTC m=+0.025881217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:25 np0005592157 podman[84371]: 2026-01-22 13:34:25.795607987 +0000 UTC m=+0.142701754 container init 4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:34:25 np0005592157 podman[84371]: 2026-01-22 13:34:25.802497106 +0000 UTC m=+0.149590873 container start 4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:25 np0005592157 podman[84371]: 2026-01-22 13:34:25.806131655 +0000 UTC m=+0.153225442 container attach 4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:34:25 np0005592157 intelligent_keldysh[84388]: 167 167
Jan 22 08:34:25 np0005592157 systemd[1]: libpod-4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580.scope: Deactivated successfully.
Jan 22 08:34:25 np0005592157 podman[84371]: 2026-01-22 13:34:25.809414296 +0000 UTC m=+0.156508063 container died 4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:34:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-db4a365eb16c96bde897a9e260a9370855494ed86c9af6d7204a68a7d29ab374-merged.mount: Deactivated successfully.
Jan 22 08:34:25 np0005592157 podman[84371]: 2026-01-22 13:34:25.8499072 +0000 UTC m=+0.197000967 container remove 4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:34:25 np0005592157 systemd[1]: libpod-conmon-4a9402983b7921fb159584ea75bc6824a349eb3063808cdfbafd7b2a30cd4580.scope: Deactivated successfully.
Jan 22 08:34:26 np0005592157 podman[84420]: 2026-01-22 13:34:26.144842089 +0000 UTC m=+0.041113510 container create 1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:26 np0005592157 systemd[1]: Started libpod-conmon-1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188.scope.
Jan 22 08:34:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b83678a0c5731cf47cf879aad8775560fede493a4b534da113e04c700c8821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b83678a0c5731cf47cf879aad8775560fede493a4b534da113e04c700c8821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b83678a0c5731cf47cf879aad8775560fede493a4b534da113e04c700c8821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b83678a0c5731cf47cf879aad8775560fede493a4b534da113e04c700c8821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b83678a0c5731cf47cf879aad8775560fede493a4b534da113e04c700c8821/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592157 podman[84420]: 2026-01-22 13:34:26.129206846 +0000 UTC m=+0.025478297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:26 np0005592157 podman[84420]: 2026-01-22 13:34:26.238417656 +0000 UTC m=+0.134689077 container init 1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:26 np0005592157 podman[84420]: 2026-01-22 13:34:26.251109968 +0000 UTC m=+0.147381389 container start 1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:34:26 np0005592157 podman[84420]: 2026-01-22 13:34:26.255887055 +0000 UTC m=+0.152158486 container attach 1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:34:26 np0005592157 ceph-mon[74359]: Deploying daemon osd.0 on compute-0
Jan 22 08:34:26 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test[84436]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 22 08:34:26 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test[84436]:                            [--no-systemd] [--no-tmpfs]
Jan 22 08:34:26 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test[84436]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 22 08:34:27 np0005592157 systemd[1]: libpod-1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188.scope: Deactivated successfully.
Jan 22 08:34:27 np0005592157 podman[84420]: 2026-01-22 13:34:27.01713293 +0000 UTC m=+0.913404371 container died 1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e9b83678a0c5731cf47cf879aad8775560fede493a4b534da113e04c700c8821-merged.mount: Deactivated successfully.
Jan 22 08:34:27 np0005592157 podman[84420]: 2026-01-22 13:34:27.068594874 +0000 UTC m=+0.964866295 container remove 1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:27 np0005592157 systemd[1]: libpod-conmon-1aec4e670549a364a41c67f42d9e37b09ce7c3d5ff8f03d847bb85de178f7188.scope: Deactivated successfully.
Jan 22 08:34:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:27 np0005592157 systemd[1]: Reloading.
Jan 22 08:34:27 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:27 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:27 np0005592157 systemd[1]: Reloading.
Jan 22 08:34:27 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:27 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 22 08:34:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 08:34:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:34:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:34:27 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 22 08:34:27 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 22 08:34:27 np0005592157 systemd[1]: Starting Ceph osd.0 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:34:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:28 np0005592157 podman[84593]: 2026-01-22 13:34:28.237232418 +0000 UTC m=+0.025723913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 08:34:28 np0005592157 podman[84593]: 2026-01-22 13:34:28.414898119 +0000 UTC m=+0.203389614 container create f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 08:34:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7672fe72e79f818fc1028f5ab817a326bbfef09831657e7c1c01a4aa72ef581b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7672fe72e79f818fc1028f5ab817a326bbfef09831657e7c1c01a4aa72ef581b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7672fe72e79f818fc1028f5ab817a326bbfef09831657e7c1c01a4aa72ef581b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7672fe72e79f818fc1028f5ab817a326bbfef09831657e7c1c01a4aa72ef581b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7672fe72e79f818fc1028f5ab817a326bbfef09831657e7c1c01a4aa72ef581b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:28 np0005592157 podman[84593]: 2026-01-22 13:34:28.499162357 +0000 UTC m=+0.287653862 container init f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:28 np0005592157 podman[84593]: 2026-01-22 13:34:28.516971684 +0000 UTC m=+0.305463189 container start f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:34:28 np0005592157 podman[84593]: 2026-01-22 13:34:28.521194598 +0000 UTC m=+0.309686073 container attach f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:29 np0005592157 ceph-mon[74359]: Deploying daemon osd.1 on compute-1
Jan 22 08:34:29 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate[84610]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 08:34:29 np0005592157 bash[84593]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 08:34:29 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate[84610]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:29 np0005592157 bash[84593]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:29 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate[84610]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:29 np0005592157 bash[84593]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:29 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate[84610]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:29 np0005592157 bash[84593]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:29 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate[84610]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:29 np0005592157 bash[84593]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:29 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate[84610]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 08:34:29 np0005592157 bash[84593]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 08:34:29 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate[84610]: --> ceph-volume raw activate successful for osd ID: 0
Jan 22 08:34:29 np0005592157 bash[84593]: --> ceph-volume raw activate successful for osd ID: 0
Jan 22 08:34:29 np0005592157 systemd[1]: libpod-f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4.scope: Deactivated successfully.
Jan 22 08:34:29 np0005592157 systemd[1]: libpod-f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4.scope: Consumed 1.058s CPU time.
Jan 22 08:34:29 np0005592157 conmon[84610]: conmon f7ad009331cdbd0cf2e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4.scope/container/memory.events
Jan 22 08:34:29 np0005592157 podman[84593]: 2026-01-22 13:34:29.560021227 +0000 UTC m=+1.348512762 container died f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7672fe72e79f818fc1028f5ab817a326bbfef09831657e7c1c01a4aa72ef581b-merged.mount: Deactivated successfully.
Jan 22 08:34:29 np0005592157 podman[84593]: 2026-01-22 13:34:29.70153377 +0000 UTC m=+1.490025255 container remove f7ad009331cdbd0cf2e810ba0adde4748a24515428d1e621a6725631021fa7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:29 np0005592157 podman[84789]: 2026-01-22 13:34:29.910250664 +0000 UTC m=+0.046125634 container create 447e358c079d937e628713c80d8c3e7f89fc0b65ddfb3c5035905de3a290198d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5603bb6d0be48d3413a28181ddef0b3ce927695c6963977712907510e59dba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5603bb6d0be48d3413a28181ddef0b3ce927695c6963977712907510e59dba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5603bb6d0be48d3413a28181ddef0b3ce927695c6963977712907510e59dba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5603bb6d0be48d3413a28181ddef0b3ce927695c6963977712907510e59dba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5603bb6d0be48d3413a28181ddef0b3ce927695c6963977712907510e59dba/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592157 podman[84789]: 2026-01-22 13:34:29.886593093 +0000 UTC m=+0.022468093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:29 np0005592157 podman[84789]: 2026-01-22 13:34:29.991393135 +0000 UTC m=+0.127268155 container init 447e358c079d937e628713c80d8c3e7f89fc0b65ddfb3c5035905de3a290198d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Jan 22 08:34:30 np0005592157 podman[84789]: 2026-01-22 13:34:30.008236539 +0000 UTC m=+0.144111519 container start 447e358c079d937e628713c80d8c3e7f89fc0b65ddfb3c5035905de3a290198d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:30 np0005592157 bash[84789]: 447e358c079d937e628713c80d8c3e7f89fc0b65ddfb3c5035905de3a290198d
Jan 22 08:34:30 np0005592157 systemd[1]: Started Ceph osd.0 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: pidfile_write: ignore empty --pid-file
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede42e1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede42e1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede42e1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede42e1800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede5119800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede5119800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede5119800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede5119800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede5119800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 08:34:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:34:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:34:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede42e1800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: load: jerasure load: lrc 
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 08:34:30 np0005592157 podman[84968]: 2026-01-22 13:34:30.860102288 +0000 UTC m=+0.054741524 container create 3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:30 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 08:34:30 np0005592157 systemd[1]: Started libpod-conmon-3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27.scope.
Jan 22 08:34:30 np0005592157 podman[84968]: 2026-01-22 13:34:30.833247949 +0000 UTC m=+0.027887175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:30 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:30 np0005592157 podman[84968]: 2026-01-22 13:34:30.973341048 +0000 UTC m=+0.167980264 container init 3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:34:30 np0005592157 podman[84968]: 2026-01-22 13:34:30.982499863 +0000 UTC m=+0.177139059 container start 3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_easley, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:34:30 np0005592157 podman[84968]: 2026-01-22 13:34:30.986355588 +0000 UTC m=+0.180994814 container attach 3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:30 np0005592157 determined_easley[84988]: 167 167
Jan 22 08:34:30 np0005592157 systemd[1]: libpod-3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27.scope: Deactivated successfully.
Jan 22 08:34:30 np0005592157 conmon[84988]: conmon 3ee572312ed4598727c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27.scope/container/memory.events
Jan 22 08:34:30 np0005592157 podman[84968]: 2026-01-22 13:34:30.993206796 +0000 UTC m=+0.187845992 container died 3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_easley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-01e93c11ab949ea2bd6c85eb484efb51bba9573517ec47c793c5d75b6701783f-merged.mount: Deactivated successfully.
Jan 22 08:34:31 np0005592157 podman[84968]: 2026-01-22 13:34:31.031060155 +0000 UTC m=+0.225699361 container remove 3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:31 np0005592157 systemd[1]: libpod-conmon-3ee572312ed4598727c3192f830835098a01275b312c69ba1a0129ddf9c66e27.scope: Deactivated successfully.
Jan 22 08:34:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519ac00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs mount
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs mount shared_bdev_used = 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Git sha 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: DB SUMMARY
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: DB Session ID:  FYLXJ1729S3VNZMLE2AW
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                     Options.env: 0x55ede516bc70
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                Options.info_log: 0x55ede435eba0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.write_buffer_manager: 0x55ede5274460
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.row_cache: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                              Options.wal_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.wal_compression: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_background_jobs: 4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Compression algorithms supported:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kZSTD supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kXpressCompression supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kBZip2Compression supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kLZ4Compression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kZlibCompression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: 	kSnappyCompression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ede4354dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ede4354dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ede4354dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435e5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:635]  (skipping printing options)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:635]  (skipping printing options)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1cdb96b6-81fd-4e60-bf90-9d7b186cab25
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088871186181, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088871186475, "job": 1, "event": "recovery_finished"}
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: freelist init
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: freelist _read_cfg
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs umount
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 08:34:31 np0005592157 podman[85012]: 2026-01-22 13:34:31.213876572 +0000 UTC m=+0.048175493 container create 5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 08:34:31 np0005592157 systemd[1]: Started libpod-conmon-5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662.scope.
Jan 22 08:34:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600a1a0f5c8f107e6a0ad30b614de2926a7970fbee2cdef7cc5b4811e3fb8a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600a1a0f5c8f107e6a0ad30b614de2926a7970fbee2cdef7cc5b4811e3fb8a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600a1a0f5c8f107e6a0ad30b614de2926a7970fbee2cdef7cc5b4811e3fb8a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592157 podman[85012]: 2026-01-22 13:34:31.195168103 +0000 UTC m=+0.029467044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600a1a0f5c8f107e6a0ad30b614de2926a7970fbee2cdef7cc5b4811e3fb8a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592157 podman[85012]: 2026-01-22 13:34:31.303097242 +0000 UTC m=+0.137396193 container init 5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 08:34:31 np0005592157 podman[85012]: 2026-01-22 13:34:31.312862622 +0000 UTC m=+0.147161583 container start 5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 22 08:34:31 np0005592157 podman[85012]: 2026-01-22 13:34:31.31646884 +0000 UTC m=+0.150767841 container attach 5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bdev(0x55ede519b400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs mount
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluefs mount shared_bdev_used = 4718592
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Git sha 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: DB SUMMARY
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: DB Session ID:  FYLXJ1729S3VNZMLE2AX
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                     Options.env: 0x55ede43a03f0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                Options.info_log: 0x55ede433b580
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.write_buffer_manager: 0x55ede5274960
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.row_cache: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                              Options.wal_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.wal_compression: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_background_jobs: 4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Compression algorithms supported:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kZSTD supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kXpressCompression supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kBZip2Compression supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kZSTDNotFinalCompression supported: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kLZ4Compression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kZlibCompression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kLZ4HCCompression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   kSnappyCompression supported: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354f30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354f30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354f30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ede4354f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ede4354f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ede4354f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4354f30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f100)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4355610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f100)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ede4355610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ede435f100)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ede4355610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1cdb96b6-81fd-4e60-bf90-9d7b186cab25
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088871456156, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088871465064, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088871, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1cdb96b6-81fd-4e60-bf90-9d7b186cab25", "db_session_id": "FYLXJ1729S3VNZMLE2AX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088871468318, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088871, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1cdb96b6-81fd-4e60-bf90-9d7b186cab25", "db_session_id": "FYLXJ1729S3VNZMLE2AX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088871474040, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088871, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1cdb96b6-81fd-4e60-bf90-9d7b186cab25", "db_session_id": "FYLXJ1729S3VNZMLE2AX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088871475983, "job": 1, "event": "recovery_finished"}
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ede4412700
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: DB pointer 0x55ede525da00
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 460.80 MB usag
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: _get_class not permitted to load lua
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: _get_class not permitted to load sdk
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: _get_class not permitted to load test_remote_reads
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0 0 load_pgs
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0 0 load_pgs opened 0 pgs
Jan 22 08:34:31 np0005592157 ceph-osd[84809]: osd.0 0 log_to_monitors true
Jan 22 08:34:31 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0[84805]: 2026-01-22T13:34:31.513+0000 7fc4c2a3a740 -1 osd.0 0 log_to_monitors true
Jan 22 08:34:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 22 08:34:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:32 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:32 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]: {
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]:        "osd_id": 0,
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]:        "type": "bluestore"
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]:    }
Jan 22 08:34:32 np0005592157 hopeful_babbage[85224]: }
Jan 22 08:34:32 np0005592157 systemd[1]: libpod-5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662.scope: Deactivated successfully.
Jan 22 08:34:32 np0005592157 podman[85012]: 2026-01-22 13:34:32.282359388 +0000 UTC m=+1.116658309 container died 5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:34:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-600a1a0f5c8f107e6a0ad30b614de2926a7970fbee2cdef7cc5b4811e3fb8a04-merged.mount: Deactivated successfully.
Jan 22 08:34:32 np0005592157 podman[85012]: 2026-01-22 13:34:32.369054336 +0000 UTC m=+1.203353257 container remove 5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:34:32 np0005592157 systemd[1]: libpod-conmon-5fe516777586cab60584d3f3b7ff4332170b3f96bc7f276721925c6e3021c662.scope: Deactivated successfully.
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:32 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 22 08:34:32 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 22 08:34:33 np0005592157 ceph-osd[84809]: osd.0 0 done with init, starting boot process
Jan 22 08:34:33 np0005592157 ceph-osd[84809]: osd.0 0 start_boot
Jan 22 08:34:33 np0005592157 ceph-osd[84809]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 22 08:34:33 np0005592157 ceph-osd[84809]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 22 08:34:33 np0005592157 ceph-osd[84809]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 22 08:34:33 np0005592157 ceph-osd[84809]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 22 08:34:33 np0005592157 ceph-osd[84809]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:33 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:33 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:33 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3137458487; not ready for session (expect reconnect)
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:33 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:34 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3137458487; not ready for session (expect reconnect)
Jan 22 08:34:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:34 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:34 np0005592157 ceph-mon[74359]: from='osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 22 08:34:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 22 08:34:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 22 08:34:35 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3137458487; not ready for session (expect reconnect)
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:35 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 22 08:34:35 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:35 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:36 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3137458487; not ready for session (expect reconnect)
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:36 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:36 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/795411386; not ready for session (expect reconnect)
Jan 22 08:34:36 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:36 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:34:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:37 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3137458487; not ready for session (expect reconnect)
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:37 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:37 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/795411386; not ready for session (expect reconnect)
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:37 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: from='osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:38 np0005592157 podman[85692]: 2026-01-22 13:34:38.131517749 +0000 UTC m=+0.106399212 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:38 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3137458487; not ready for session (expect reconnect)
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:38 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v57: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 08:34:38 np0005592157 podman[85692]: 2026-01-22 13:34:38.288532193 +0000 UTC m=+0.263413606 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 08:34:38 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/795411386; not ready for session (expect reconnect)
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:38 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.469 iops: 4472.188 elapsed_sec: 0.671
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: log_channel(cluster) log [WRN] : OSD bench result of 4472.187817 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
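The bench figure in the two lines above is internally consistent: at the default 4 KiB block size, 17.469 MiB/s works out to 17.469 × 1024 / 4 ≈ 4472 IOPS, matching the reported 4472.188. The gating behaviour the warning describes (discard a measurement outside the 50–500 IOPS plausibility window and keep the current capacity) can be sketched as below; the function and constants mirror the log message only, not Ceph's actual implementation, and the real fix is the one the message recommends — benchmark with fio and set `osd_mclock_max_capacity_iops_[hdd|ssd]` explicitly.

```python
# Sketch of the mclock capacity gating described in the warning above.
# The 50/500 thresholds and the 315-IOPS default come from the log
# message itself; the function is illustrative, not Ceph source code.

def effective_iops_capacity(measured_iops: float,
                            current_capacity: float = 315.0,
                            low: float = 50.0,
                            high: float = 500.0) -> float:
    """Accept the osd bench result only if it is inside the window."""
    if low <= measured_iops <= high:
        return measured_iops
    return current_capacity  # out of range: capacity stays unchanged

# Cross-check of the bench line: bandwidth 17.469 MiB/s at 4 KiB per IO.
bench_iops = 17.469 * 1024 / 4
print(round(bench_iops))                  # 4472, as reported
print(effective_iops_capacity(4472.188))  # 315.0 -- unchanged, as logged
```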
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 0 waiting for initial osdmap
Jan 22 08:34:38 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0[84805]: 2026-01-22T13:34:38.827+0000 7fc4be9ba640 -1 osd.0 0 waiting for initial osdmap
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 08:34:38 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-0[84805]: 2026-01-22T13:34:38.856+0000 7fc4b9fe2640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 22 08:34:38 np0005592157 ceph-osd[84809]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:34:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:39 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3137458487; not ready for session (expect reconnect)
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:39 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 08:34:39 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/795411386; not ready for session (expect reconnect)
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:39 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:39 np0005592157 ceph-osd[84809]: osd.0 9 tick checking mon for new map
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: OSD bench result of 4472.187817 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487] boot
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:39 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:39 np0005592157 ceph-osd[84809]: osd.0 10 state: booting -> active
Jan 22 08:34:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 22 08:34:40 np0005592157 podman[86048]: 2026-01-22 13:34:40.237686856 +0000 UTC m=+0.044550804 container create 7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:40 np0005592157 systemd[1]: Started libpod-conmon-7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8.scope.
Jan 22 08:34:40 np0005592157 podman[86048]: 2026-01-22 13:34:40.216427934 +0000 UTC m=+0.023291912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:40 np0005592157 podman[86048]: 2026-01-22 13:34:40.333370385 +0000 UTC m=+0.140234333 container init 7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:40 np0005592157 podman[86048]: 2026-01-22 13:34:40.340398147 +0000 UTC m=+0.147262085 container start 7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:34:40 np0005592157 podman[86048]: 2026-01-22 13:34:40.344263152 +0000 UTC m=+0.151127150 container attach 7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 08:34:40 np0005592157 elegant_booth[86064]: 167 167
Jan 22 08:34:40 np0005592157 systemd[1]: libpod-7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8.scope: Deactivated successfully.
Jan 22 08:34:40 np0005592157 conmon[86064]: conmon 7a0ea500ebd1b36572e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8.scope/container/memory.events
Jan 22 08:34:40 np0005592157 podman[86048]: 2026-01-22 13:34:40.349787938 +0000 UTC m=+0.156651886 container died 7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b0a09e5468fd2fce681d922ee6feba640ee3df5f5075170cab71792ed3cc1bf9-merged.mount: Deactivated successfully.
Jan 22 08:34:40 np0005592157 podman[86048]: 2026-01-22 13:34:40.393949592 +0000 UTC m=+0.200813540 container remove 7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:40 np0005592157 systemd[1]: libpod-conmon-7a0ea500ebd1b36572e05e01290aad9ae7b6442b695891f3e66cf186740192d8.scope: Deactivated successfully.
Jan 22 08:34:40 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] creating mgr pool
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 22 08:34:40 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/795411386; not ready for session (expect reconnect)
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:40 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:40 np0005592157 podman[86089]: 2026-01-22 13:34:40.563014122 +0000 UTC m=+0.053651658 container create 6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 08:34:40 np0005592157 systemd[1]: Started libpod-conmon-6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1.scope.
Jan 22 08:34:40 np0005592157 podman[86089]: 2026-01-22 13:34:40.535994608 +0000 UTC m=+0.026632224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da70bde9f593efb2e4bc1423f4838f5c75e12f0b2349c77813b672337c0e430/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da70bde9f593efb2e4bc1423f4838f5c75e12f0b2349c77813b672337c0e430/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da70bde9f593efb2e4bc1423f4838f5c75e12f0b2349c77813b672337c0e430/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da70bde9f593efb2e4bc1423f4838f5c75e12f0b2349c77813b672337c0e430/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:40 np0005592157 podman[86089]: 2026-01-22 13:34:40.648134111 +0000 UTC m=+0.138771747 container init 6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:34:40 np0005592157 podman[86089]: 2026-01-22 13:34:40.660785131 +0000 UTC m=+0.151422667 container start 6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:40 np0005592157 podman[86089]: 2026-01-22 13:34:40.664767819 +0000 UTC m=+0.155405355 container attach 6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:40 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 22 08:34:40 np0005592157 ceph-osd[84809]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 22 08:34:40 np0005592157 ceph-osd[84809]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 22 08:34:40 np0005592157 ceph-osd[84809]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: osd.0 [v2:192.168.122.100:6802/3137458487,v1:192.168.122.100:6803/3137458487] boot
Jan 22 08:34:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 22 08:34:40 np0005592157 python3[86136]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
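The Ansible task above runs `ceph status --format json` inside a throwaway container and pipes it through `jq .osdmap.num_up_osds`, evidently polling until the expected number of OSDs is up. The same extraction, applied to a trimmed-down status document of the shape this command returns (field names taken from the JSON logged later in this section; the surrounding playbook retry logic is an assumption):

```python
import json

# Abridged "ceph status --format json" output -- only the osdmap
# portion, shaped like the document printed by the status container.
status_json = '''
{"osdmap": {"epoch": 11, "num_osds": 2, "num_up_osds": 1,
            "num_in_osds": 2, "num_remapped_pgs": 0}}
'''

# Equivalent of the playbook's `| jq .osdmap.num_up_osds`:
num_up = json.loads(status_json)["osdmap"]["num_up_osds"]
print(num_up)  # 1 -- only osd.0 has finished booting at this point
```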
Jan 22 08:34:40 np0005592157 podman[86138]: 2026-01-22 13:34:40.977025544 +0000 UTC m=+0.054966630 container create c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e (image=quay.io/ceph/ceph:v18, name=gracious_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:41 np0005592157 systemd[1]: Started libpod-conmon-c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e.scope.
Jan 22 08:34:41 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7af2cd110bc45bb1699fb7c8c0cf3fe39c1f4bbf69750fba988d083d879cbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7af2cd110bc45bb1699fb7c8c0cf3fe39c1f4bbf69750fba988d083d879cbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7af2cd110bc45bb1699fb7c8c0cf3fe39c1f4bbf69750fba988d083d879cbd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:41 np0005592157 podman[86138]: 2026-01-22 13:34:40.958432847 +0000 UTC m=+0.036373953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:41 np0005592157 podman[86138]: 2026-01-22 13:34:41.058283718 +0000 UTC m=+0.136224824 container init c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e (image=quay.io/ceph/ceph:v18, name=gracious_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 08:34:41 np0005592157 podman[86138]: 2026-01-22 13:34:41.066528391 +0000 UTC m=+0.144469477 container start c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e (image=quay.io/ceph/ceph:v18, name=gracious_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:41 np0005592157 podman[86138]: 2026-01-22 13:34:41.070356235 +0000 UTC m=+0.148297321 container attach c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e (image=quay.io/ceph/ceph:v18, name=gracious_stonebraker, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:34:41 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 22 08:34:41 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:41 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/795411386; not ready for session (expect reconnect)
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:41 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198265472' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 08:34:41 np0005592157 gracious_stonebraker[86154]: 
Jan 22 08:34:41 np0005592157 gracious_stonebraker[86154]: {"fsid":"088fe176-0106-5401-803c-2da38b73b76a","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":164,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":1,"osd_up_since":1769088879,"num_in_osds":2,"osd_in_since":1769088859,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":446980096,"bytes_avail":7065018368,"bytes_total":7511998464},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-22T13:33:48.260436+0000","services":{}},"progress_events":{}}
Jan 22 08:34:41 np0005592157 systemd[1]: libpod-c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e.scope: Deactivated successfully.
Jan 22 08:34:41 np0005592157 podman[86138]: 2026-01-22 13:34:41.772144041 +0000 UTC m=+0.850085127 container died c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e (image=quay.io/ceph/ceph:v18, name=gracious_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 08:34:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]: [
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:    {
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "available": false,
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "ceph_device": false,
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "lsm_data": {},
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "lvs": [],
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "path": "/dev/sr0",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "rejected_reasons": [
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "Has a FileSystem",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "Insufficient space (<5GB)"
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        ],
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        "sys_api": {
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "actuators": null,
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "device_nodes": "sr0",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "devname": "sr0",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "human_readable_size": "482.00 KB",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "id_bus": "ata",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "model": "QEMU DVD-ROM",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "nr_requests": "2",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "parent": "/dev/sr0",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "partitions": {},
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "path": "/dev/sr0",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "removable": "1",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "rev": "2.5+",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "ro": "0",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "rotational": "1",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "sas_address": "",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "sas_device_handle": "",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "scheduler_mode": "mq-deadline",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "sectors": 0,
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "sectorsize": "2048",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "size": 493568.0,
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "support_discard": "2048",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "type": "disk",
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:            "vendor": "QEMU"
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:        }
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]:    }
Jan 22 08:34:41 np0005592157 jovial_chaplygin[86106]: ]
Jan 22 08:34:41 np0005592157 systemd[1]: libpod-6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1.scope: Deactivated successfully.
Jan 22 08:34:41 np0005592157 systemd[1]: libpod-6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1.scope: Consumed 1.321s CPU time.
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Jan 22 08:34:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ab7af2cd110bc45bb1699fb7c8c0cf3fe39c1f4bbf69750fba988d083d879cbd-merged.mount: Deactivated successfully.
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:42 np0005592157 podman[86138]: 2026-01-22 13:34:42.317705722 +0000 UTC m=+1.395646808 container remove c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e (image=quay.io/ceph/ceph:v18, name=gracious_stonebraker, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:34:42 np0005592157 systemd[1]: libpod-conmon-c31eb5132351e50b376902a1bffab7089c526b4ac93b311a81dcdfb1c4e25c5e.scope: Deactivated successfully.
Jan 22 08:34:42 np0005592157 podman[86089]: 2026-01-22 13:34:42.388041988 +0000 UTC m=+1.878679554 container died 6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:34:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3da70bde9f593efb2e4bc1423f4838f5c75e12f0b2349c77813b672337c0e430-merged.mount: Deactivated successfully.
Jan 22 08:34:42 np0005592157 podman[87264]: 2026-01-22 13:34:42.445705053 +0000 UTC m=+0.468931011 container remove 6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 08:34:42 np0005592157 systemd[1]: libpod-conmon-6f289f169e35a75b9d5d5e927fbd1df12d8e5b8b3929fbb6e3d431271de77cc1.scope: Deactivated successfully.
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/795411386; not ready for session (expect reconnect)
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 22 08:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Jan 22 08:34:42 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Jan 22 08:34:42 np0005592157 python3[87302]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:42 np0005592157 podman[87303]: 2026-01-22 13:34:42.869351081 +0000 UTC m=+0.073508665 container create 971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589 (image=quay.io/ceph/ceph:v18, name=modest_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:34:42 np0005592157 systemd[1]: Started libpod-conmon-971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589.scope.
Jan 22 08:34:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:42 np0005592157 podman[87303]: 2026-01-22 13:34:42.845571008 +0000 UTC m=+0.049728622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60837d46817418490924bea37d518a3c96e8645802d92286f195bac751f58868/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60837d46817418490924bea37d518a3c96e8645802d92286f195bac751f58868/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:42 np0005592157 podman[87303]: 2026-01-22 13:34:42.965479861 +0000 UTC m=+0.169637465 container init 971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589 (image=quay.io/ceph/ceph:v18, name=modest_carver, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:42 np0005592157 podman[87303]: 2026-01-22 13:34:42.974746048 +0000 UTC m=+0.178903652 container start 971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589 (image=quay.io/ceph/ceph:v18, name=modest_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:42 np0005592157 podman[87303]: 2026-01-22 13:34:42.978612883 +0000 UTC m=+0.182770567 container attach 971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589 (image=quay.io/ceph/ceph:v18, name=modest_carver, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386] boot
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: OSD bench result of 3467.855722 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: osd.1 [v2:192.168.122.101:6800/795411386,v1:192.168.122.101:6801/795411386] boot
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 22 08:34:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 22 08:34:44 np0005592157 modest_carver[87318]: pool 'vms' created
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 22 08:34:44 np0005592157 systemd[1]: libpod-971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589.scope: Deactivated successfully.
Jan 22 08:34:44 np0005592157 podman[87303]: 2026-01-22 13:34:44.058801057 +0000 UTC m=+1.262958651 container died 971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589 (image=quay.io/ceph/ceph:v18, name=modest_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:34:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-60837d46817418490924bea37d518a3c96e8645802d92286f195bac751f58868-merged.mount: Deactivated successfully.
Jan 22 08:34:44 np0005592157 podman[87303]: 2026-01-22 13:34:44.112315391 +0000 UTC m=+1.316472955 container remove 971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589 (image=quay.io/ceph/ceph:v18, name=modest_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:44 np0005592157 systemd[1]: libpod-conmon-971771d9e5003bd51227a08efd0c8da4979b0ab8d3fdb22569454da31d728589.scope: Deactivated successfully.
Jan 22 08:34:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v65: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] creating main.db for devicehealth
Jan 22 08:34:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 08:34:44 np0005592157 python3[87381]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 08:34:44 np0005592157 podman[87396]: 2026-01-22 13:34:44.574482455 +0000 UTC m=+0.064821182 container create bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b (image=quay.io/ceph/ceph:v18, name=brave_goodall, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:44 np0005592157 systemd[1]: Started libpod-conmon-bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b.scope.
Jan 22 08:34:44 np0005592157 podman[87396]: 2026-01-22 13:34:44.537119758 +0000 UTC m=+0.027458485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae41d46a19548445ae7f1600874e1d9310314a85bfc3c307587130b1f6f0415/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae41d46a19548445ae7f1600874e1d9310314a85bfc3c307587130b1f6f0415/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:44 np0005592157 podman[87396]: 2026-01-22 13:34:44.676586121 +0000 UTC m=+0.166924918 container init bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b (image=quay.io/ceph/ceph:v18, name=brave_goodall, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:44 np0005592157 podman[87396]: 2026-01-22 13:34:44.68466831 +0000 UTC m=+0.175007007 container start bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b (image=quay.io/ceph/ceph:v18, name=brave_goodall, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:44 np0005592157 podman[87396]: 2026-01-22 13:34:44.688950665 +0000 UTC m=+0.179289392 container attach bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b (image=quay.io/ceph/ceph:v18, name=brave_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 22 08:34:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v67: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:34:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 22 08:34:46 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 22 08:34:46 np0005592157 brave_goodall[87411]: pool 'volumes' created
Jan 22 08:34:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 22 08:34:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:46 np0005592157 systemd[1]: libpod-bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b.scope: Deactivated successfully.
Jan 22 08:34:46 np0005592157 podman[87396]: 2026-01-22 13:34:46.229263132 +0000 UTC m=+1.719601869 container died bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b (image=quay.io/ceph/ceph:v18, name=brave_goodall, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:34:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bae41d46a19548445ae7f1600874e1d9310314a85bfc3c307587130b1f6f0415-merged.mount: Deactivated successfully.
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:34:46
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Some PGs (0.666667) are unknown; try again later
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:34:46 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 08:34:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:34:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:34:46 np0005592157 podman[87396]: 2026-01-22 13:34:46.47848594 +0000 UTC m=+1.968824627 container remove bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b (image=quay.io/ceph/ceph:v18, name=brave_goodall, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:46 np0005592157 systemd[1]: libpod-conmon-bc2741e4c2b9bfa8c2e3c96d1a2e3271e52f97686b0a325ec0308eda4a3dfa1b.scope: Deactivated successfully.
Jan 22 08:34:46 np0005592157 python3[87477]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:46 np0005592157 podman[87478]: 2026-01-22 13:34:46.927840349 +0000 UTC m=+0.054946339 container create 3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c (image=quay.io/ceph/ceph:v18, name=cool_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:46 np0005592157 systemd[1]: Started libpod-conmon-3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c.scope.
Jan 22 08:34:46 np0005592157 podman[87478]: 2026-01-22 13:34:46.905247555 +0000 UTC m=+0.032353565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feaddc952d9ccbb50549617a75ab26d70c54a7e45d967538b21e8de47431c1d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feaddc952d9ccbb50549617a75ab26d70c54a7e45d967538b21e8de47431c1d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:47 np0005592157 podman[87478]: 2026-01-22 13:34:47.118996811 +0000 UTC m=+0.246102821 container init 3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c (image=quay.io/ceph/ceph:v18, name=cool_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:34:47 np0005592157 podman[87478]: 2026-01-22 13:34:47.129839337 +0000 UTC m=+0.256945327 container start 3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c (image=quay.io/ceph/ceph:v18, name=cool_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 22 08:34:47 np0005592157 podman[87478]: 2026-01-22 13:34:47.311020195 +0000 UTC m=+0.438126215 container attach 3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c (image=quay.io/ceph/ceph:v18, name=cool_pare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.nyayzk(active, since 2m)
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 22 08:34:47 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev dc2ec29e-b0c9-4368-a179-d98877394a2c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:34:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 22 08:34:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v70: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 22 08:34:48 np0005592157 cool_pare[87493]: pool 'backups' created
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 22 08:34:48 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 632e0b7e-fdc7-438d-8940-1f14c4011e28 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:34:48 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:48 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev dc2ec29e-b0c9-4368-a179-d98877394a2c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 22 08:34:48 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event dc2ec29e-b0c9-4368-a179-d98877394a2c (PG autoscaler increasing pool 2 PGs from 1 to 32) in 1 seconds
Jan 22 08:34:48 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 632e0b7e-fdc7-438d-8940-1f14c4011e28 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 22 08:34:48 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 632e0b7e-fdc7-438d-8940-1f14c4011e28 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 0 seconds
Jan 22 08:34:48 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:48 np0005592157 systemd[1]: libpod-3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c.scope: Deactivated successfully.
Jan 22 08:34:48 np0005592157 podman[87478]: 2026-01-22 13:34:48.358544127 +0000 UTC m=+1.485650107 container died 3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c (image=quay.io/ceph/ceph:v18, name=cool_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 08:34:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-feaddc952d9ccbb50549617a75ab26d70c54a7e45d967538b21e8de47431c1d8-merged.mount: Deactivated successfully.
Jan 22 08:34:48 np0005592157 podman[87478]: 2026-01-22 13:34:48.406016192 +0000 UTC m=+1.533122182 container remove 3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c (image=quay.io/ceph/ceph:v18, name=cool_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:34:48 np0005592157 systemd[1]: libpod-conmon-3dffcc75851ffd94584a828c2996ef22721346349f493e6df1c0ae0b76abf61c.scope: Deactivated successfully.
Jan 22 08:34:48 np0005592157 python3[87556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:48 np0005592157 podman[87557]: 2026-01-22 13:34:48.837670878 +0000 UTC m=+0.067468487 container create 9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7 (image=quay.io/ceph/ceph:v18, name=vibrant_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 08:34:48 np0005592157 systemd[1]: Started libpod-conmon-9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7.scope.
Jan 22 08:34:48 np0005592157 podman[87557]: 2026-01-22 13:34:48.803206232 +0000 UTC m=+0.033003891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a8d9d6177c8f864bfeb269d9553e6ad113b5e326228d9219c18a1efa06b9fb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a8d9d6177c8f864bfeb269d9553e6ad113b5e326228d9219c18a1efa06b9fb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:48 np0005592157 podman[87557]: 2026-01-22 13:34:48.932854364 +0000 UTC m=+0.162652043 container init 9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7 (image=quay.io/ceph/ceph:v18, name=vibrant_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:48 np0005592157 podman[87557]: 2026-01-22 13:34:48.943124866 +0000 UTC m=+0.172922425 container start 9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7 (image=quay.io/ceph/ceph:v18, name=vibrant_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 08:34:48 np0005592157 podman[87557]: 2026-01-22 13:34:48.947488953 +0000 UTC m=+0.177286542 container attach 9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7 (image=quay.io/ceph/ceph:v18, name=vibrant_mestorf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 22 08:34:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 22 08:34:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 22 08:34:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:34:49 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:49 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 22 08:34:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v73: 4 pgs: 4 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 22 08:34:50 np0005592157 vibrant_mestorf[87573]: pool 'images' created
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 22 08:34:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:34:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:34:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 20 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=20 pruub=12.947719574s) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active pruub 31.836803436s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:34:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 20 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=20 pruub=12.947719574s) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown pruub 31.836803436s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:50 np0005592157 systemd[1]: libpod-9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7.scope: Deactivated successfully.
Jan 22 08:34:50 np0005592157 podman[87557]: 2026-01-22 13:34:50.401471772 +0000 UTC m=+1.631269341 container died 9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7 (image=quay.io/ceph/ceph:v18, name=vibrant_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 22 08:34:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-65a8d9d6177c8f864bfeb269d9553e6ad113b5e326228d9219c18a1efa06b9fb-merged.mount: Deactivated successfully.
Jan 22 08:34:50 np0005592157 podman[87557]: 2026-01-22 13:34:50.551001342 +0000 UTC m=+1.780798951 container remove 9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7 (image=quay.io/ceph/ceph:v18, name=vibrant_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:34:50 np0005592157 systemd[1]: libpod-conmon-9609c5f4cf049a27b4e4a8b782a8a8fa24e967f6ebb5605a97e756f30065e7e7.scope: Deactivated successfully.
Jan 22 08:34:50 np0005592157 python3[87635]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:50 np0005592157 podman[87636]: 2026-01-22 13:34:50.99338484 +0000 UTC m=+0.095555476 container create ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d (image=quay.io/ceph/ceph:v18, name=naughty_buck, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 08:34:51 np0005592157 podman[87636]: 2026-01-22 13:34:50.920180834 +0000 UTC m=+0.022351490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:51 np0005592157 systemd[1]: Started libpod-conmon-ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d.scope.
Jan 22 08:34:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0dc20c01ee05ebc971a03dfb5a42125703f4494af89cd5ccd8642af46406be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0dc20c01ee05ebc971a03dfb5a42125703f4494af89cd5ccd8642af46406be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:51 np0005592157 podman[87636]: 2026-01-22 13:34:51.113305724 +0000 UTC m=+0.215476370 container init ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d (image=quay.io/ceph/ceph:v18, name=naughty_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:51 np0005592157 podman[87636]: 2026-01-22 13:34:51.121088795 +0000 UTC m=+0.223259431 container start ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d (image=quay.io/ceph/ceph:v18, name=naughty_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:34:51 np0005592157 podman[87636]: 2026-01-22 13:34:51.124756415 +0000 UTC m=+0.226927051 container attach ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d (image=quay.io/ceph/ceph:v18, name=naughty_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1f( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1c( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1e( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1d( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.a( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.9( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.8( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.5( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.4( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.3( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.2( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1b( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.6( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.7( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.c( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.b( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.d( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.e( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.f( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.10( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.11( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.12( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.14( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.13( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.15( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.16( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.17( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.18( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.19( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1a( empty local-lis/les=16/17 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1c( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1e( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.a( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.5( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.3( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.2( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.4( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.6( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.7( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.c( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.d( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.f( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.10( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.12( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.14( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.13( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.16( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.19( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.18( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.17( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1f( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 21 pg[3.b( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=16/16 les/c/f=17/17/0 sis=20) [0] r=0 lpr=20 pi=[16,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 22 08:34:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v76: 67 pgs: 63 unknown, 4 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:34:52 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 4 completed events
Jan 22 08:34:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:34:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 22 08:34:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 22 08:34:53 np0005592157 naughty_buck[87652]: pool 'cephfs.cephfs.meta' created
Jan 22 08:34:53 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 22 08:34:53 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:53 np0005592157 systemd[1]: libpod-ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d.scope: Deactivated successfully.
Jan 22 08:34:53 np0005592157 podman[87636]: 2026-01-22 13:34:53.086060697 +0000 UTC m=+2.188231343 container died ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d (image=quay.io/ceph/ceph:v18, name=naughty_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:34:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3b0dc20c01ee05ebc971a03dfb5a42125703f4494af89cd5ccd8642af46406be-merged.mount: Deactivated successfully.
Jan 22 08:34:53 np0005592157 podman[87636]: 2026-01-22 13:34:53.135827158 +0000 UTC m=+2.237997794 container remove ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d (image=quay.io/ceph/ceph:v18, name=naughty_buck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:53 np0005592157 systemd[1]: libpod-conmon-ae0e803874001995560e7c8be191d53ea2b8122b76ace23eb1f2ace7bd5ace1d.scope: Deactivated successfully.
Jan 22 08:34:53 np0005592157 python3[87717]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:53 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 22 08:34:53 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 22 08:34:53 np0005592157 podman[87718]: 2026-01-22 13:34:53.479877093 +0000 UTC m=+0.044558485 container create 5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517 (image=quay.io/ceph/ceph:v18, name=upbeat_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:53 np0005592157 podman[87718]: 2026-01-22 13:34:53.463143662 +0000 UTC m=+0.027825074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:53 np0005592157 systemd[1]: Started libpod-conmon-5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517.scope.
Jan 22 08:34:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d827b02571ac99cb58710beddafde81fe6dc8580c68943ba21677bab47a9ffbf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d827b02571ac99cb58710beddafde81fe6dc8580c68943ba21677bab47a9ffbf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:53 np0005592157 podman[87718]: 2026-01-22 13:34:53.895096684 +0000 UTC m=+0.459778096 container init 5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517 (image=quay.io/ceph/ceph:v18, name=upbeat_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 08:34:53 np0005592157 podman[87718]: 2026-01-22 13:34:53.901911682 +0000 UTC m=+0.466593074 container start 5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517 (image=quay.io/ceph/ceph:v18, name=upbeat_ptolemy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 08:34:53 np0005592157 podman[87718]: 2026-01-22 13:34:53.996722459 +0000 UTC m=+0.561403941 container attach 5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517 (image=quay.io/ceph/ceph:v18, name=upbeat_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 22 08:34:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:34:54 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v78: 68 pgs: 33 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:34:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 22 08:34:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 22 08:34:54 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 22 08:34:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 22 08:34:55 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:34:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 22 08:34:55 np0005592157 upbeat_ptolemy[87733]: pool 'cephfs.cephfs.data' created
Jan 22 08:34:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 22 08:34:55 np0005592157 systemd[1]: libpod-5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517.scope: Deactivated successfully.
Jan 22 08:34:55 np0005592157 podman[87718]: 2026-01-22 13:34:55.519918387 +0000 UTC m=+2.084599819 container died 5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517 (image=quay.io/ceph/ceph:v18, name=upbeat_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:34:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d827b02571ac99cb58710beddafde81fe6dc8580c68943ba21677bab47a9ffbf-merged.mount: Deactivated successfully.
Jan 22 08:34:55 np0005592157 systemd[75969]: Starting Mark boot as successful...
Jan 22 08:34:55 np0005592157 systemd[75969]: Finished Mark boot as successful.
Jan 22 08:34:55 np0005592157 podman[87718]: 2026-01-22 13:34:55.574464236 +0000 UTC m=+2.139145668 container remove 5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517 (image=quay.io/ceph/ceph:v18, name=upbeat_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:34:55 np0005592157 systemd[1]: libpod-conmon-5e15d7ac403e057d6d60f4308f8cf98c1e3649e354f95fb2020b5feaa0f5b517.scope: Deactivated successfully.
Jan 22 08:34:55 np0005592157 python3[87799]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:56 np0005592157 podman[87800]: 2026-01-22 13:34:56.000334789 +0000 UTC m=+0.046564154 container create 1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc (image=quay.io/ceph/ceph:v18, name=keen_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:56 np0005592157 systemd[1]: Started libpod-conmon-1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc.scope.
Jan 22 08:34:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d335a8c5ba1e63f540df88558ab149f959451a09de13a16cc6fd3eae2c7b2c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d335a8c5ba1e63f540df88558ab149f959451a09de13a16cc6fd3eae2c7b2c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:56 np0005592157 podman[87800]: 2026-01-22 13:34:56.0736892 +0000 UTC m=+0.119918595 container init 1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc (image=quay.io/ceph/ceph:v18, name=keen_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 08:34:56 np0005592157 podman[87800]: 2026-01-22 13:34:55.978286588 +0000 UTC m=+0.024515973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:56 np0005592157 podman[87800]: 2026-01-22 13:34:56.079526773 +0000 UTC m=+0.125756138 container start 1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc (image=quay.io/ceph/ceph:v18, name=keen_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:34:56 np0005592157 podman[87800]: 2026-01-22 13:34:56.083346667 +0000 UTC m=+0.129576052 container attach 1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc (image=quay.io/ceph/ceph:v18, name=keen_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v81: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:34:56 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 22 08:34:56 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 22 08:34:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 22 08:34:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:34:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 22 08:34:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 22 08:34:56 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:34:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 22 08:34:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 08:34:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:34:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 22 08:34:57 np0005592157 ceph-mon[74359]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:34:57 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 08:34:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 08:34:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 22 08:34:57 np0005592157 keen_ishizaka[87815]: enabled application 'rbd' on pool 'vms'
Jan 22 08:34:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 22 08:34:57 np0005592157 systemd[1]: libpod-1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc.scope: Deactivated successfully.
Jan 22 08:34:57 np0005592157 podman[87800]: 2026-01-22 13:34:57.559309295 +0000 UTC m=+1.605538660 container died 1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc (image=quay.io/ceph/ceph:v18, name=keen_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:34:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1d335a8c5ba1e63f540df88558ab149f959451a09de13a16cc6fd3eae2c7b2c7-merged.mount: Deactivated successfully.
Jan 22 08:34:57 np0005592157 podman[87800]: 2026-01-22 13:34:57.700200922 +0000 UTC m=+1.746430287 container remove 1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc (image=quay.io/ceph/ceph:v18, name=keen_ishizaka, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:57 np0005592157 systemd[1]: libpod-conmon-1a58dfa12d5cc505c706e7218bdcb8b4efdda41385940bac5b2a6267e3a721dc.scope: Deactivated successfully.
Jan 22 08:34:58 np0005592157 python3[87878]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:34:58 np0005592157 podman[87879]: 2026-01-22 13:34:58.09198111 +0000 UTC m=+0.054230122 container create c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99 (image=quay.io/ceph/ceph:v18, name=vigilant_brattain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:34:58 np0005592157 systemd[1]: Started libpod-conmon-c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99.scope.
Jan 22 08:34:58 np0005592157 podman[87879]: 2026-01-22 13:34:58.069496048 +0000 UTC m=+0.031745110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:34:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:34:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdba3e49a79a25e2e5282cf8fdc2360316bbd142d907a72d3dd521be5adbad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdba3e49a79a25e2e5282cf8fdc2360316bbd142d907a72d3dd521be5adbad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v84: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:34:58 np0005592157 podman[87879]: 2026-01-22 13:34:58.190540449 +0000 UTC m=+0.152789491 container init c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99 (image=quay.io/ceph/ceph:v18, name=vigilant_brattain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 08:34:58 np0005592157 podman[87879]: 2026-01-22 13:34:58.197381797 +0000 UTC m=+0.159630809 container start c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99 (image=quay.io/ceph/ceph:v18, name=vigilant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 08:34:58 np0005592157 podman[87879]: 2026-01-22 13:34:58.201328474 +0000 UTC m=+0.163577486 container attach c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99 (image=quay.io/ceph/ceph:v18, name=vigilant_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 22 08:34:58 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 08:34:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 22 08:34:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 08:34:59 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Jan 22 08:34:59 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Jan 22 08:34:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 22 08:34:59 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 08:34:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 08:34:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 22 08:34:59 np0005592157 vigilant_brattain[87895]: enabled application 'rbd' on pool 'volumes'
Jan 22 08:34:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 22 08:34:59 np0005592157 systemd[1]: libpod-c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99.scope: Deactivated successfully.
Jan 22 08:34:59 np0005592157 podman[87879]: 2026-01-22 13:34:59.591089297 +0000 UTC m=+1.553338309 container died c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99 (image=quay.io/ceph/ceph:v18, name=vigilant_brattain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3fdba3e49a79a25e2e5282cf8fdc2360316bbd142d907a72d3dd521be5adbad3-merged.mount: Deactivated successfully.
Jan 22 08:34:59 np0005592157 podman[87879]: 2026-01-22 13:34:59.633687622 +0000 UTC m=+1.595936634 container remove c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99 (image=quay.io/ceph/ceph:v18, name=vigilant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:34:59 np0005592157 systemd[1]: libpod-conmon-c1e4d9400f19cc3c955e7d71371796b3f62fbb81f5ff48ac9dc328b1cf473b99.scope: Deactivated successfully.
Jan 22 08:34:59 np0005592157 python3[87957]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:00 np0005592157 podman[87958]: 2026-01-22 13:35:00.059286339 +0000 UTC m=+0.065118690 container create 40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9 (image=quay.io/ceph/ceph:v18, name=sweet_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:00 np0005592157 systemd[1]: Started libpod-conmon-40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9.scope.
Jan 22 08:35:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080a0dbcd7c6f26f5b3487357c260a75301246aaee3df158778e25da8ba2633/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080a0dbcd7c6f26f5b3487357c260a75301246aaee3df158778e25da8ba2633/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:00 np0005592157 podman[87958]: 2026-01-22 13:35:00.030194765 +0000 UTC m=+0.036027206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:00 np0005592157 podman[87958]: 2026-01-22 13:35:00.127516474 +0000 UTC m=+0.133348845 container init 40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9 (image=quay.io/ceph/ceph:v18, name=sweet_cohen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:00 np0005592157 podman[87958]: 2026-01-22 13:35:00.139697013 +0000 UTC m=+0.145529364 container start 40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9 (image=quay.io/ceph/ceph:v18, name=sweet_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:35:00 np0005592157 podman[87958]: 2026-01-22 13:35:00.144433039 +0000 UTC m=+0.150265390 container attach 40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9 (image=quay.io/ceph/ceph:v18, name=sweet_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:35:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v86: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828673363s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.915863037s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828557968s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.915863037s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828780174s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.916141510s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828731537s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.916099548s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828747749s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.916141510s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828611374s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.916099548s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828541756s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.916042328s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828477859s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.916042328s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828417778s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.916179657s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.828381538s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.916179657s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832946777s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920879364s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832881927s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920814514s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832929611s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920864105s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832925797s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920867920s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832898140s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920879364s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832879066s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920867920s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832838058s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920814514s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832861900s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920864105s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832804680s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920879364s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832788467s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920879364s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832736969s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920917511s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832742691s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920948029s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832711220s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920928955s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832698822s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920917511s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832715034s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920948029s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832691193s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920948029s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832696915s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920948029s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832661629s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920928955s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832643509s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.920997620s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832619667s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.920997620s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.827182770s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.915611267s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832696915s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 43.921165466s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.827157974s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.915611267s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=14.832674980s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.921165466s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 22 08:35:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 22 08:35:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 22 08:35:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:01 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 08:35:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 08:35:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Jan 22 08:35:01 np0005592157 sweet_cohen[87973]: enabled application 'rbd' on pool 'backups'
Jan 22 08:35:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592157 systemd[1]: libpod-40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9.scope: Deactivated successfully.
Jan 22 08:35:01 np0005592157 podman[87958]: 2026-01-22 13:35:01.651550812 +0000 UTC m=+1.657383173 container died 40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9 (image=quay.io/ceph/ceph:v18, name=sweet_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 08:35:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5080a0dbcd7c6f26f5b3487357c260a75301246aaee3df158778e25da8ba2633-merged.mount: Deactivated successfully.
Jan 22 08:35:01 np0005592157 podman[87958]: 2026-01-22 13:35:01.698717519 +0000 UTC m=+1.704549910 container remove 40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9 (image=quay.io/ceph/ceph:v18, name=sweet_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:35:01 np0005592157 systemd[1]: libpod-conmon-40ad1f5a23585358db1c54789f44fe16987fdd5d0df93912c1b1a6330aeda6e9.scope: Deactivated successfully.
Jan 22 08:35:02 np0005592157 python3[88033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:02 np0005592157 podman[88034]: 2026-01-22 13:35:02.105144675 +0000 UTC m=+0.050162822 container create 73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c (image=quay.io/ceph/ceph:v18, name=suspicious_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 22 08:35:02 np0005592157 systemd[1]: Started libpod-conmon-73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c.scope.
Jan 22 08:35:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v89: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:02 np0005592157 podman[88034]: 2026-01-22 13:35:02.085902303 +0000 UTC m=+0.030920470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9f8bb7e3f3b5ca0e3fb66605849a432b31b7104a8724c270cc62c0b2045849/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9f8bb7e3f3b5ca0e3fb66605849a432b31b7104a8724c270cc62c0b2045849/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:02 np0005592157 podman[88034]: 2026-01-22 13:35:02.197976354 +0000 UTC m=+0.142994501 container init 73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c (image=quay.io/ceph/ceph:v18, name=suspicious_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 08:35:02 np0005592157 podman[88034]: 2026-01-22 13:35:02.203803887 +0000 UTC m=+0.148822034 container start 73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c (image=quay.io/ceph/ceph:v18, name=suspicious_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:35:02 np0005592157 podman[88034]: 2026-01-22 13:35:02.208275117 +0000 UTC m=+0.153293294 container attach 73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c (image=quay.io/ceph/ceph:v18, name=suspicious_boyd, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:02 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Jan 22 08:35:02 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Jan 22 08:35:02 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 08:35:02 np0005592157 ceph-mon[74359]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 22 08:35:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 08:35:03 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 22 08:35:03 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 22 08:35:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 22 08:35:03 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 08:35:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 08:35:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Jan 22 08:35:03 np0005592157 suspicious_boyd[88050]: enabled application 'rbd' on pool 'images'
Jan 22 08:35:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Jan 22 08:35:03 np0005592157 systemd[1]: libpod-73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c.scope: Deactivated successfully.
Jan 22 08:35:03 np0005592157 conmon[88050]: conmon 73fac283327ab114657f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c.scope/container/memory.events
Jan 22 08:35:03 np0005592157 podman[88034]: 2026-01-22 13:35:03.686298936 +0000 UTC m=+1.631317083 container died 73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c (image=quay.io/ceph/ceph:v18, name=suspicious_boyd, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4a9f8bb7e3f3b5ca0e3fb66605849a432b31b7104a8724c270cc62c0b2045849-merged.mount: Deactivated successfully.
Jan 22 08:35:04 np0005592157 podman[88034]: 2026-01-22 13:35:04.13015819 +0000 UTC m=+2.075176337 container remove 73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c (image=quay.io/ceph/ceph:v18, name=suspicious_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 08:35:04 np0005592157 systemd[1]: libpod-conmon-73fac283327ab114657f1eec7afe8a1a53817059efee7b22eea45350554e099c.scope: Deactivated successfully.
Jan 22 08:35:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v91: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:04 np0005592157 python3[88112]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:04 np0005592157 podman[88113]: 2026-01-22 13:35:04.52411616 +0000 UTC m=+0.045241201 container create 255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8 (image=quay.io/ceph/ceph:v18, name=beautiful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 08:35:04 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 22 08:35:04 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 22 08:35:04 np0005592157 systemd[1]: Started libpod-conmon-255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8.scope.
Jan 22 08:35:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:04 np0005592157 podman[88113]: 2026-01-22 13:35:04.505845102 +0000 UTC m=+0.026970153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5098b84348a7b25ec3345297a9418e3f4d8900a47575863722dd69cea489a5c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5098b84348a7b25ec3345297a9418e3f4d8900a47575863722dd69cea489a5c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:04 np0005592157 podman[88113]: 2026-01-22 13:35:04.620790683 +0000 UTC m=+0.141915774 container init 255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8 (image=quay.io/ceph/ceph:v18, name=beautiful_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:35:04 np0005592157 podman[88113]: 2026-01-22 13:35:04.627282683 +0000 UTC m=+0.148407744 container start 255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8 (image=quay.io/ceph/ceph:v18, name=beautiful_neumann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 08:35:04 np0005592157 podman[88113]: 2026-01-22 13:35:04.632188123 +0000 UTC m=+0.153313174 container attach 255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8 (image=quay.io/ceph/ceph:v18, name=beautiful_neumann, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:04 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 08:35:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 22 08:35:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 08:35:05 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 22 08:35:05 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 22 08:35:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 22 08:35:05 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 08:35:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 08:35:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Jan 22 08:35:05 np0005592157 beautiful_neumann[88128]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 22 08:35:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 22 08:35:05 np0005592157 systemd[1]: libpod-255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8.scope: Deactivated successfully.
Jan 22 08:35:05 np0005592157 podman[88113]: 2026-01-22 13:35:05.771838007 +0000 UTC m=+1.292963048 container died 255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8 (image=quay.io/ceph/ceph:v18, name=beautiful_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5098b84348a7b25ec3345297a9418e3f4d8900a47575863722dd69cea489a5c2-merged.mount: Deactivated successfully.
Jan 22 08:35:05 np0005592157 podman[88113]: 2026-01-22 13:35:05.819331533 +0000 UTC m=+1.340456554 container remove 255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8 (image=quay.io/ceph/ceph:v18, name=beautiful_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 08:35:05 np0005592157 systemd[1]: libpod-conmon-255cf1c9ad1bdafe5245dae0ff957b86d4098a36582342fdc6bc8263148a2aa8.scope: Deactivated successfully.
Jan 22 08:35:06 np0005592157 python3[88190]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:06 np0005592157 podman[88191]: 2026-01-22 13:35:06.220866389 +0000 UTC m=+0.043714174 container create 6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1 (image=quay.io/ceph/ceph:v18, name=crazy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:35:06 np0005592157 systemd[1]: Started libpod-conmon-6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1.scope.
Jan 22 08:35:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f9ec53ba88f57e4a70fb6a71ae464be56f12c97fd9421286d9fd1aafc1e6d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f9ec53ba88f57e4a70fb6a71ae464be56f12c97fd9421286d9fd1aafc1e6d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:06 np0005592157 podman[88191]: 2026-01-22 13:35:06.200037737 +0000 UTC m=+0.022885572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:06 np0005592157 podman[88191]: 2026-01-22 13:35:06.300420891 +0000 UTC m=+0.123268696 container init 6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1 (image=quay.io/ceph/ceph:v18, name=crazy_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:06 np0005592157 podman[88191]: 2026-01-22 13:35:06.30687341 +0000 UTC m=+0.129721185 container start 6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1 (image=quay.io/ceph/ceph:v18, name=crazy_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:06 np0005592157 podman[88191]: 2026-01-22 13:35:06.310566 +0000 UTC m=+0.133413785 container attach 6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1 (image=quay.io/ceph/ceph:v18, name=crazy_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:06 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 22 08:35:06 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 22 08:35:06 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 08:35:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 22 08:35:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 08:35:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 22 08:35:07 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 08:35:07 np0005592157 ceph-mon[74359]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v94: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 08:35:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Jan 22 08:35:08 np0005592157 crazy_hofstadter[88206]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 22 08:35:08 np0005592157 systemd[1]: libpod-6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1.scope: Deactivated successfully.
Jan 22 08:35:08 np0005592157 podman[88191]: 2026-01-22 13:35:08.340455024 +0000 UTC m=+2.163302849 container died 6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1 (image=quay.io/ceph/ceph:v18, name=crazy_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Jan 22 08:35:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-30f9ec53ba88f57e4a70fb6a71ae464be56f12c97fd9421286d9fd1aafc1e6d1-merged.mount: Deactivated successfully.
Jan 22 08:35:08 np0005592157 podman[88191]: 2026-01-22 13:35:08.543748544 +0000 UTC m=+2.366596339 container remove 6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1 (image=quay.io/ceph/ceph:v18, name=crazy_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:08 np0005592157 systemd[1]: libpod-conmon-6284ab7a035fd777f38cd7f4ba44bb05ae625f5a6d75ae9675cb499ce6023ca1.scope: Deactivated successfully.
Jan 22 08:35:08 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 08:35:09 np0005592157 python3[88320]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:35:10 np0005592157 python3[88391]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088909.3068125-37405-78917081792176/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:35:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v96: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:10 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 22 08:35:10 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 22 08:35:10 np0005592157 python3[88493]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:35:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:35:11 np0005592157 python3[88568]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088910.453071-37419-147736077493765/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=3ebb71363bbb9ab9492cf8efc000960b41d06d72 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:35:11 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 22 08:35:11 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 22 08:35:11 np0005592157 python3[88618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:11 np0005592157 podman[88619]: 2026-01-22 13:35:11.78080294 +0000 UTC m=+0.075682419 container create 4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e (image=quay.io/ceph/ceph:v18, name=angry_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:35:11 np0005592157 systemd[1]: Started libpod-conmon-4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e.scope.
Jan 22 08:35:11 np0005592157 podman[88619]: 2026-01-22 13:35:11.751907041 +0000 UTC m=+0.046786610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823239968cf43919587805c8c228d86dedfb6ce37d06eff539d5fa8537381722/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823239968cf43919587805c8c228d86dedfb6ce37d06eff539d5fa8537381722/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823239968cf43919587805c8c228d86dedfb6ce37d06eff539d5fa8537381722/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:11 np0005592157 podman[88619]: 2026-01-22 13:35:11.859872461 +0000 UTC m=+0.154751940 container init 4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e (image=quay.io/ceph/ceph:v18, name=angry_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:11 np0005592157 podman[88619]: 2026-01-22 13:35:11.864708519 +0000 UTC m=+0.159587988 container start 4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e (image=quay.io/ceph/ceph:v18, name=angry_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:11 np0005592157 podman[88619]: 2026-01-22 13:35:11.868541314 +0000 UTC m=+0.163420803 container attach 4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e (image=quay.io/ceph/ceph:v18, name=angry_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:35:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v97: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:12 np0005592157 ceph-mon[74359]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:35:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 22 08:35:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:35:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 08:35:12 np0005592157 angry_davinci[88635]: 
Jan 22 08:35:12 np0005592157 angry_davinci[88635]: [global]
Jan 22 08:35:12 np0005592157 angry_davinci[88635]:         fsid = 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:35:12 np0005592157 angry_davinci[88635]:         mon_host = 192.168.122.100
Jan 22 08:35:12 np0005592157 systemd[1]: libpod-4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e.scope: Deactivated successfully.
Jan 22 08:35:12 np0005592157 podman[88619]: 2026-01-22 13:35:12.446665254 +0000 UTC m=+0.741544723 container died 4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e (image=quay.io/ceph/ceph:v18, name=angry_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 22 08:35:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-823239968cf43919587805c8c228d86dedfb6ce37d06eff539d5fa8537381722-merged.mount: Deactivated successfully.
Jan 22 08:35:12 np0005592157 podman[88619]: 2026-01-22 13:35:12.496382284 +0000 UTC m=+0.791261753 container remove 4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e (image=quay.io/ceph/ceph:v18, name=angry_davinci, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:12 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 22 08:35:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:12 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 22 08:35:12 np0005592157 systemd[1]: libpod-conmon-4e568fba033749953c7346b58a830137b4ae513e12f24cc1d58a032f3c7f0e3e.scope: Deactivated successfully.
Jan 22 08:35:12 np0005592157 python3[88696]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:12 np0005592157 podman[88697]: 2026-01-22 13:35:12.951119816 +0000 UTC m=+0.061585682 container create 544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538 (image=quay.io/ceph/ceph:v18, name=thirsty_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:12 np0005592157 systemd[1]: Started libpod-conmon-544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538.scope.
Jan 22 08:35:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3a887b4b744a702ec6e5ae2fd723394cd15d8608117b944206ac7d4db9f24e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3a887b4b744a702ec6e5ae2fd723394cd15d8608117b944206ac7d4db9f24e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3a887b4b744a702ec6e5ae2fd723394cd15d8608117b944206ac7d4db9f24e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:13 np0005592157 podman[88697]: 2026-01-22 13:35:12.93050228 +0000 UTC m=+0.040968186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:13 np0005592157 podman[88697]: 2026-01-22 13:35:13.293238844 +0000 UTC m=+0.403704720 container init 544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538 (image=quay.io/ceph/ceph:v18, name=thirsty_agnesi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:35:13 np0005592157 podman[88697]: 2026-01-22 13:35:13.29840398 +0000 UTC m=+0.408869836 container start 544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538 (image=quay.io/ceph/ceph:v18, name=thirsty_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 08:35:13 np0005592157 podman[88697]: 2026-01-22 13:35:13.347436964 +0000 UTC m=+0.457902850 container attach 544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538 (image=quay.io/ceph/ceph:v18, name=thirsty_agnesi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 08:35:13 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:35:13 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 08:35:13 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 22 08:35:13 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 22 08:35:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 22 08:35:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2012634198' entity='client.admin' 
Jan 22 08:35:13 np0005592157 thirsty_agnesi[88712]: set ssl_option
Jan 22 08:35:14 np0005592157 systemd[1]: libpod-544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538.scope: Deactivated successfully.
Jan 22 08:35:14 np0005592157 podman[88737]: 2026-01-22 13:35:14.042773352 +0000 UTC m=+0.025786594 container died 544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538 (image=quay.io/ceph/ceph:v18, name=thirsty_agnesi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:35:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bb3a887b4b744a702ec6e5ae2fd723394cd15d8608117b944206ac7d4db9f24e-merged.mount: Deactivated successfully.
Jan 22 08:35:14 np0005592157 podman[88737]: 2026-01-22 13:35:14.084859105 +0000 UTC m=+0.067872247 container remove 544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538 (image=quay.io/ceph/ceph:v18, name=thirsty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:35:14 np0005592157 systemd[1]: libpod-conmon-544a5dc3dec4b193807c25142c796badd1c7a9470ad33aacbe9e167b25bca538.scope: Deactivated successfully.
Jan 22 08:35:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v98: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:14 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2012634198' entity='client.admin' 
Jan 22 08:35:14 np0005592157 python3[88777]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:14 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 22 08:35:14 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 22 08:35:14 np0005592157 podman[88778]: 2026-01-22 13:35:14.519195906 +0000 UTC m=+0.042952205 container create 2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e (image=quay.io/ceph/ceph:v18, name=magical_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:14 np0005592157 systemd[1]: Started libpod-conmon-2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e.scope.
Jan 22 08:35:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d529e195f96e0208d0c27c63c5d662f41b207d7d3b5fb1e923dabdcc332b596/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d529e195f96e0208d0c27c63c5d662f41b207d7d3b5fb1e923dabdcc332b596/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d529e195f96e0208d0c27c63c5d662f41b207d7d3b5fb1e923dabdcc332b596/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:14 np0005592157 podman[88778]: 2026-01-22 13:35:14.501162504 +0000 UTC m=+0.024918823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:14 np0005592157 podman[88778]: 2026-01-22 13:35:14.601590839 +0000 UTC m=+0.125347158 container init 2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e (image=quay.io/ceph/ceph:v18, name=magical_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 22 08:35:14 np0005592157 podman[88778]: 2026-01-22 13:35:14.610296791 +0000 UTC m=+0.134053090 container start 2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e (image=quay.io/ceph/ceph:v18, name=magical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 08:35:14 np0005592157 podman[88778]: 2026-01-22 13:35:14.613815678 +0000 UTC m=+0.137571977 container attach 2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e (image=quay.io/ceph/ceph:v18, name=magical_jemison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:35:15 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14237 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:35:15 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:15 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 22 08:35:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:15 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 22 08:35:15 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 22 08:35:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 22 08:35:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:15 np0005592157 magical_jemison[88793]: Scheduled rgw.rgw update...
Jan 22 08:35:15 np0005592157 magical_jemison[88793]: Scheduled ingress.rgw.default update...
Jan 22 08:35:15 np0005592157 systemd[1]: libpod-2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e.scope: Deactivated successfully.
Jan 22 08:35:15 np0005592157 podman[88778]: 2026-01-22 13:35:15.24699653 +0000 UTC m=+0.770752829 container died 2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e (image=quay.io/ceph/ceph:v18, name=magical_jemison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3d529e195f96e0208d0c27c63c5d662f41b207d7d3b5fb1e923dabdcc332b596-merged.mount: Deactivated successfully.
Jan 22 08:35:15 np0005592157 podman[88778]: 2026-01-22 13:35:15.297201382 +0000 UTC m=+0.820957681 container remove 2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e (image=quay.io/ceph/ceph:v18, name=magical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 22 08:35:15 np0005592157 systemd[1]: libpod-conmon-2ea79427ea76e971b544aa20162888e861f733b4687584546882a875f230af0e.scope: Deactivated successfully.
Jan 22 08:35:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v99: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:16 np0005592157 ceph-mon[74359]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:16 np0005592157 ceph-mon[74359]: Saving service ingress.rgw.default spec with placement count:2
Jan 22 08:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:35:16 np0005592157 python3[88906]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:35:16 np0005592157 python3[88977]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088916.1762807-37460-77987194131631/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:35:17 np0005592157 python3[89027]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:17 np0005592157 podman[89028]: 2026-01-22 13:35:17.417318832 +0000 UTC m=+0.046747638 container create e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c (image=quay.io/ceph/ceph:v18, name=clever_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 08:35:17 np0005592157 systemd[1]: Started libpod-conmon-e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c.scope.
Jan 22 08:35:17 np0005592157 podman[89028]: 2026-01-22 13:35:17.398415158 +0000 UTC m=+0.027843974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 22 08:35:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 22 08:35:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f80c618cf8e53b069d501ffb840ff722ea2e9eb5bd31bc2661d43238822237/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f80c618cf8e53b069d501ffb840ff722ea2e9eb5bd31bc2661d43238822237/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f80c618cf8e53b069d501ffb840ff722ea2e9eb5bd31bc2661d43238822237/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:17 np0005592157 podman[89028]: 2026-01-22 13:35:17.574538331 +0000 UTC m=+0.203967167 container init e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c (image=quay.io/ceph/ceph:v18, name=clever_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:17 np0005592157 podman[89028]: 2026-01-22 13:35:17.583136152 +0000 UTC m=+0.212564948 container start e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c (image=quay.io/ceph/ceph:v18, name=clever_franklin, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:17 np0005592157 podman[89028]: 2026-01-22 13:35:17.588402411 +0000 UTC m=+0.217831217 container attach e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c (image=quay.io/ceph/ceph:v18, name=clever_franklin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:18 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14239 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:35:18 np0005592157 ceph-mgr[74655]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 08:35:18 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:18.162+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e2 new map
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e2 print_map
e2
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name         cephfs
epoch           2
flags           12 joinable allow_snaps allow_multimds_snaps
created         2026-01-22T13:35:18.163168+0000
modified        2026-01-22T13:35:18.163248+0000
tableserver     0
root            0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
max_xattr_size  65536
required_client_features        {}
last_failure    0
last_failure_osd_epoch  0
compat          compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds         1
in
up              {}
failed
damaged
stopped
data_pools      [7]
metadata_pool   6
inline_data     disabled
balancer
bal_rank_mask   -1
standby_count_wanted    0
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 22 08:35:18 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:18 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 22 08:35:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v101: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:18 np0005592157 ceph-mgr[74655]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 22 08:35:18 np0005592157 systemd[1]: libpod-e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c.scope: Deactivated successfully.
Jan 22 08:35:18 np0005592157 podman[89028]: 2026-01-22 13:35:18.211914465 +0000 UTC m=+0.841343271 container died e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c (image=quay.io/ceph/ceph:v18, name=clever_franklin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:35:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-96f80c618cf8e53b069d501ffb840ff722ea2e9eb5bd31bc2661d43238822237-merged.mount: Deactivated successfully.
Jan 22 08:35:18 np0005592157 podman[89028]: 2026-01-22 13:35:18.300961121 +0000 UTC m=+0.930389927 container remove e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c (image=quay.io/ceph/ceph:v18, name=clever_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:18 np0005592157 systemd[1]: libpod-conmon-e2bc78da53c79f4cba00ae21887e751eb34a6cfeeac0989864b71e87b482917c.scope: Deactivated successfully.
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 08:35:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Jan 22 08:35:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Jan 22 08:35:18 np0005592157 python3[89105]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:18 np0005592157 podman[89106]: 2026-01-22 13:35:18.694391798 +0000 UTC m=+0.026888841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:18 np0005592157 podman[89106]: 2026-01-22 13:35:18.876498478 +0000 UTC m=+0.208995531 container create 8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3 (image=quay.io/ceph/ceph:v18, name=bold_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:19 np0005592157 systemd[1]: Started libpod-conmon-8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3.scope.
Jan 22 08:35:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fb568a4fbc5c82f985fc223ed0267fc3820ca56e74b1089bdf529b127d070b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fb568a4fbc5c82f985fc223ed0267fc3820ca56e74b1089bdf529b127d070b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fb568a4fbc5c82f985fc223ed0267fc3820ca56e74b1089bdf529b127d070b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:19 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.9 deep-scrub starts
Jan 22 08:35:19 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.9 deep-scrub ok
Jan 22 08:35:19 np0005592157 podman[89106]: 2026-01-22 13:35:19.563713766 +0000 UTC m=+0.896210789 container init 8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3 (image=quay.io/ceph/ceph:v18, name=bold_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 08:35:19 np0005592157 podman[89106]: 2026-01-22 13:35:19.569275033 +0000 UTC m=+0.901772046 container start 8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3 (image=quay.io/ceph/ceph:v18, name=bold_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:35:19 np0005592157 podman[89106]: 2026-01-22 13:35:19.780371014 +0000 UTC m=+1.112868037 container attach 8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3 (image=quay.io/ceph/ceph:v18, name=bold_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:19 np0005592157 ceph-mon[74359]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:20 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 08:35:20 np0005592157 ceph-mgr[74655]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:20 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 22 08:35:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v102: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:20 np0005592157 bold_hugle[89121]: Scheduled mds.cephfs update...
Jan 22 08:35:20 np0005592157 systemd[1]: libpod-8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3.scope: Deactivated successfully.
Jan 22 08:35:20 np0005592157 podman[89146]: 2026-01-22 13:35:20.409488556 +0000 UTC m=+0.029169117 container died 8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3 (image=quay.io/ceph/ceph:v18, name=bold_hugle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:35:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6fb568a4fbc5c82f985fc223ed0267fc3820ca56e74b1089bdf529b127d070b5-merged.mount: Deactivated successfully.
Jan 22 08:35:20 np0005592157 podman[89146]: 2026-01-22 13:35:20.460917619 +0000 UTC m=+0.080598100 container remove 8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3 (image=quay.io/ceph/ceph:v18, name=bold_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:35:20 np0005592157 systemd[1]: libpod-conmon-8b391787ed86d4c2854f5953517907bad06b74b32b7df7d3fff887484cea43b3.scope: Deactivated successfully.
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:35:20 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 22 08:35:20 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:35:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:35:20 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:35:20 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:35:21 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 22 08:35:21 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 22 08:35:21 np0005592157 python3[89239]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:35:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:35:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:35:21 np0005592157 ceph-mon[74359]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:35:22 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:35:22 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:35:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v103: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:22 np0005592157 python3[89312]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088921.498755-37512-203307455506088/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=8d4a0ad3eb7bcba9ed45036c12ef9de6a4ee9832 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:35:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:22 np0005592157 python3[89362]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:22 np0005592157 ceph-mon[74359]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:35:22 np0005592157 podman[89363]: 2026-01-22 13:35:22.974431664 +0000 UTC m=+0.062520645 container create 44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc (image=quay.io/ceph/ceph:v18, name=zealous_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:35:23 np0005592157 systemd[1]: Started libpod-conmon-44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc.scope.
Jan 22 08:35:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467a2eb810c2717d5feab6af3aeb796726c6236b01cfa544f77ebc94b0b869b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467a2eb810c2717d5feab6af3aeb796726c6236b01cfa544f77ebc94b0b869b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:23 np0005592157 podman[89363]: 2026-01-22 13:35:23.042910285 +0000 UTC m=+0.130999276 container init 44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc (image=quay.io/ceph/ceph:v18, name=zealous_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 08:35:23 np0005592157 podman[89363]: 2026-01-22 13:35:22.952503456 +0000 UTC m=+0.040592447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:23 np0005592157 podman[89363]: 2026-01-22 13:35:23.051157978 +0000 UTC m=+0.139246949 container start 44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc (image=quay.io/ceph/ceph:v18, name=zealous_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 08:35:23 np0005592157 podman[89363]: 2026-01-22 13:35:23.054732875 +0000 UTC m=+0.142821886 container attach 44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc (image=quay.io/ceph/ceph:v18, name=zealous_wilson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:35:23 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:35:23 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:35:23 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 22 08:35:23 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 22 08:35:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 22 08:35:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 08:35:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 08:35:23 np0005592157 systemd[1]: libpod-44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc.scope: Deactivated successfully.
Jan 22 08:35:23 np0005592157 podman[89403]: 2026-01-22 13:35:23.98299057 +0000 UTC m=+0.030912739 container died 44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc (image=quay.io/ceph/ceph:v18, name=zealous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 08:35:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v104: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:24 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:35:24 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:35:24 np0005592157 ceph-mon[74359]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:35:24 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 08:35:24 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Jan 22 08:35:24 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Jan 22 08:35:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-467a2eb810c2717d5feab6af3aeb796726c6236b01cfa544f77ebc94b0b869b6-merged.mount: Deactivated successfully.
Jan 22 08:35:24 np0005592157 podman[89403]: 2026-01-22 13:35:24.615959457 +0000 UTC m=+0.663881646 container remove 44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc (image=quay.io/ceph/ceph:v18, name=zealous_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:24 np0005592157 systemd[1]: libpod-conmon-44eebe962b3a884f638d3409fe76db0ecf2e8f3830bc63202e8a0c5e57b403fc.scope: Deactivated successfully.
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v105: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:25 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 38e7ff98-c425-4f0f-830c-2195f3d18bb4 (Updating mon deployment (+2 -> 3))
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:25 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 22 08:35:25 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 22 08:35:25 np0005592157 python3[89443]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:25 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Jan 22 08:35:25 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Jan 22 08:35:25 np0005592157 podman[89445]: 2026-01-22 13:35:25.580270046 +0000 UTC m=+0.023187361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:25 np0005592157 podman[89445]: 2026-01-22 13:35:25.735687709 +0000 UTC m=+0.178605014 container create f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83 (image=quay.io/ceph/ceph:v18, name=tender_keller, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:25 np0005592157 systemd[1]: Started libpod-conmon-f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83.scope.
Jan 22 08:35:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ad7e11a22e774f6410de9a3e03656e4b4a47ae7402773be174c7049a5bfb07/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ad7e11a22e774f6410de9a3e03656e4b4a47ae7402773be174c7049a5bfb07/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:26 np0005592157 podman[89445]: 2026-01-22 13:35:26.220415458 +0000 UTC m=+0.663332803 container init f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83 (image=quay.io/ceph/ceph:v18, name=tender_keller, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:35:26 np0005592157 podman[89445]: 2026-01-22 13:35:26.22862897 +0000 UTC m=+0.671546265 container start f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83 (image=quay.io/ceph/ceph:v18, name=tender_keller, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:35:26 np0005592157 podman[89445]: 2026-01-22 13:35:26.480510113 +0000 UTC m=+0.923427438 container attach f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83 (image=quay.io/ceph/ceph:v18, name=tender_keller, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: Deploying daemon mon.compute-2 on compute-2
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 22 08:35:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935446327' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 08:35:26 np0005592157 tender_keller[89461]: 
Jan 22 08:35:26 np0005592157 tender_keller[89461]: {"fsid":"088fe176-0106-5401-803c-2da38b73b76a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":209,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":33,"num_osds":2,"num_up_osds":2,"osd_up_since":1769088883,"num_in_osds":2,"osd_in_since":1769088859,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":69}],"num_pgs":69,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56053760,"bytes_avail":14967943168,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-22T13:33:48.260436+0000","services":{}},"progress_events":{}}
Jan 22 08:35:26 np0005592157 systemd[1]: libpod-f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83.scope: Deactivated successfully.
Jan 22 08:35:26 np0005592157 podman[89445]: 2026-01-22 13:35:26.916483464 +0000 UTC m=+1.359400799 container died f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83 (image=quay.io/ceph/ceph:v18, name=tender_keller, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 08:35:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d6ad7e11a22e774f6410de9a3e03656e4b4a47ae7402773be174c7049a5bfb07-merged.mount: Deactivated successfully.
Jan 22 08:35:27 np0005592157 podman[89445]: 2026-01-22 13:35:27.096258657 +0000 UTC m=+1.539175942 container remove f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83 (image=quay.io/ceph/ceph:v18, name=tender_keller, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:27 np0005592157 systemd[1]: libpod-conmon-f2ccd5786a94cf3d3a8f940420d9119aaebc0350bc90f5b062e950ce67a6ce83.scope: Deactivated successfully.
Jan 22 08:35:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v106: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:27 np0005592157 ceph-mon[74359]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:29 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 22 08:35:29 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 22 08:35:29 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:29 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:29 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 22 08:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v107: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:30 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:30 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:30 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Jan 22 08:35:30 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Jan 22 08:35:31 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:31 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v108: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:31 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:31 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 22 08:35:32 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:32 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:32 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:32 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 22 08:35:33 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:33 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v109: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:33 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:33 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 22 08:35:34 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:34 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:34 np0005592157 ceph-mon[74359]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 22 08:35:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 22 08:35:34 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:34 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 22 08:35:35 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:35 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v110: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:35 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:35 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 22 08:35:36 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3663533859; not ready for session (expect reconnect)
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:36 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.nyayzk(active, since 2m)
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Jan 22 08:35:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:36.305+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:36.307+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 08:35:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:36.307+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Jan 22 08:35:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:36.307+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:36 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:36.307+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(probing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 22 08:35:36 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 08:35:36 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:36 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 08:35:37 np0005592157 ceph-mgr[74655]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 22 08:35:37 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:35:37.105+0000 7fdf474f5640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 22 08:35:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v111: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:37 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 22 08:35:37 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 22 08:35:37 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:37 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 08:35:38 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:38 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 08:35:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v112: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:39 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 22 08:35:39 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 22 08:35:39 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:39 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 08:35:40 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 22 08:35:40 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 22 08:35:40 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:40 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v113: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.nyayzk(active, since 2m)
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Jan 22 08:35:41 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:41.563+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:41.563+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 08:35:41 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:41.563+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Jan 22 08:35:41 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:41.563+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0[74355]: 2026-01-22T13:35:41.563+0000 7f1b95921640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 38e7ff98-c425-4f0f-830c-2195f3d18bb4 (Updating mon deployment (+2 -> 3))
Jan 22 08:35:41 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 38e7ff98-c425-4f0f-830c-2195f3d18bb4 (Updating mon deployment (+2 -> 3)) in 16 seconds
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev d4abdb3e-0362-48c2-b0f4-3db49df51618 (Updating mgr deployment (+2 -> 3))
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:41 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.tjdsdx on compute-2
Jan 22 08:35:41 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.tjdsdx on compute-2
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0 calling monitor election
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-2 calling monitor election
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-1 calling monitor election
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 08:35:41 np0005592157 ceph-mon[74359]:    fs cephfs is offline because no MDS is active for it.
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592157 ceph-mon[74359]:    fs cephfs has 0 MDS online, but wants 1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:41 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1290387339; not ready for session (expect reconnect)
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 08:35:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 08:35:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:42 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 5 completed events
Jan 22 08:35:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:35:42 np0005592157 ceph-mgr[74655]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 22 08:35:42 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T13:35:42.873+0000 7fdf474f5640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:35:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v114: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: Deploying daemon mgr.compute-2.tjdsdx on compute-2
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:43 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.hzmatt on compute-1
Jan 22 08:35:43 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.hzmatt on compute-1
Jan 22 08:35:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:44 np0005592157 ceph-mon[74359]: Deploying daemon mgr.compute-1.hzmatt on compute-1
Jan 22 08:35:44 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 22 08:35:44 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev d4abdb3e-0362-48c2-b0f4-3db49df51618 (Updating mgr deployment (+2 -> 3))
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 22 08:35:45 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event d4abdb3e-0362-48c2-b0f4-3db49df51618 (Updating mgr deployment (+2 -> 3)) in 4 seconds
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 23eb5cc6-b164-4ccd-a311-335ee00e9fe2 (Updating crash deployment (+1 -> 3))
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:45 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 22 08:35:45 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 22 08:35:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v115: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:35:45 np0005592157 ceph-mon[74359]: Deploying daemon crash.compute-2 on compute-2
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:35:46
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', 'images', '.mgr']
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 08:35:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 08:35:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v116: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:47 np0005592157 python3[89523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:47 np0005592157 podman[89525]: 2026-01-22 13:35:47.514111064 +0000 UTC m=+0.081969063 container create 3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448 (image=quay.io/ceph/ceph:v18, name=keen_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:47 np0005592157 systemd[1]: Started libpod-conmon-3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448.scope.
Jan 22 08:35:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:47 np0005592157 podman[89525]: 2026-01-22 13:35:47.484954938 +0000 UTC m=+0.052813037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c314637843c1d2cee131d7f06dd9f72fcf66baf4cbd16a13ef6b8c905d27c3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c314637843c1d2cee131d7f06dd9f72fcf66baf4cbd16a13ef6b8c905d27c3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:47 np0005592157 podman[89525]: 2026-01-22 13:35:47.600118265 +0000 UTC m=+0.167976284 container init 3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448 (image=quay.io/ceph/ceph:v18, name=keen_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:47 np0005592157 podman[89525]: 2026-01-22 13:35:47.611785132 +0000 UTC m=+0.179643131 container start 3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448 (image=quay.io/ceph/ceph:v18, name=keen_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:47 np0005592157 podman[89525]: 2026-01-22 13:35:47.615477622 +0000 UTC m=+0.183335621 container attach 3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448 (image=quay.io/ceph/ceph:v18, name=keen_merkle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 22 08:35:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143486171' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 08:35:48 np0005592157 keen_merkle[89541]: 
Jan 22 08:35:48 np0005592157 keen_merkle[89541]: {"fsid":"088fe176-0106-5401-803c-2da38b73b76a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":6,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":33,"num_osds":2,"num_up_osds":2,"osd_up_since":1769088883,"num_in_osds":2,"osd_in_since":1769088859,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":69}],"num_pgs":69,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56053760,"bytes_avail":14967943168,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-22T13:35:45.386856+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"23eb5cc6-b164-4ccd-a311-335ee00e9fe2":{"message":"Updating crash deployment (+1 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 22 08:35:48 np0005592157 systemd[1]: libpod-3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448.scope: Deactivated successfully.
Jan 22 08:35:48 np0005592157 podman[89525]: 2026-01-22 13:35:48.267445305 +0000 UTC m=+0.835303314 container died 3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448 (image=quay.io/ceph/ceph:v18, name=keen_merkle, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0c314637843c1d2cee131d7f06dd9f72fcf66baf4cbd16a13ef6b8c905d27c3d-merged.mount: Deactivated successfully.
Jan 22 08:35:48 np0005592157 podman[89525]: 2026-01-22 13:35:48.322972118 +0000 UTC m=+0.890830157 container remove 3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448 (image=quay.io/ceph/ceph:v18, name=keen_merkle, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:48 np0005592157 systemd[1]: libpod-conmon-3b41c9ec0f0fa6c3460e7f8896cbdcf6ff231a55405fe4cfd63044436d957448.scope: Deactivated successfully.
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 08:35:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:35:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 22 08:35:48 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 6 completed events
Jan 22 08:35:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:35:48 np0005592157 python3[89603]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:48 np0005592157 podman[89604]: 2026-01-22 13:35:48.768511844 +0000 UTC m=+0.056111378 container create 57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0 (image=quay.io/ceph/ceph:v18, name=quirky_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:48 np0005592157 systemd[1]: Started libpod-conmon-57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0.scope.
Jan 22 08:35:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c0668b527ee61bc8bd28c4d25c81f680b623109fc25b53509759a630e2d62c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c0668b527ee61bc8bd28c4d25c81f680b623109fc25b53509759a630e2d62c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:48 np0005592157 podman[89604]: 2026-01-22 13:35:48.737473053 +0000 UTC m=+0.025072667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:48 np0005592157 podman[89604]: 2026-01-22 13:35:48.841065255 +0000 UTC m=+0.128664809 container init 57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0 (image=quay.io/ceph/ceph:v18, name=quirky_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:48 np0005592157 podman[89604]: 2026-01-22 13:35:48.845774511 +0000 UTC m=+0.133374045 container start 57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0 (image=quay.io/ceph/ceph:v18, name=quirky_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:48 np0005592157 podman[89604]: 2026-01-22 13:35:48.850008055 +0000 UTC m=+0.137607609 container attach 57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0 (image=quay.io/ceph/ceph:v18, name=quirky_sanderson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v117: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2710829164' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 08:35:49 np0005592157 quirky_sanderson[89621]: 
Jan 22 08:35:49 np0005592157 quirky_sanderson[89621]: {"epoch":3,"fsid":"088fe176-0106-5401-803c-2da38b73b76a","modified":"2026-01-22T13:35:36.312090Z","created":"2026-01-22T13:31:51.987775Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 22 08:35:49 np0005592157 quirky_sanderson[89621]: dumped monmap epoch 3
Jan 22 08:35:49 np0005592157 systemd[1]: libpod-57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0.scope: Deactivated successfully.
Jan 22 08:35:49 np0005592157 podman[89604]: 2026-01-22 13:35:49.470373752 +0000 UTC m=+0.757973366 container died 57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0 (image=quay.io/ceph/ceph:v18, name=quirky_sanderson, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e9c0668b527ee61bc8bd28c4d25c81f680b623109fc25b53509759a630e2d62c-merged.mount: Deactivated successfully.
Jan 22 08:35:49 np0005592157 podman[89604]: 2026-01-22 13:35:49.523061676 +0000 UTC m=+0.810661210 container remove 57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0 (image=quay.io/ceph/ceph:v18, name=quirky_sanderson, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:49 np0005592157 systemd[1]: libpod-conmon-57386adbb2476cd86fdf8fe9de102d89405fb436aa6bf39665b3a5859ed993c0.scope: Deactivated successfully.
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Jan 22 08:35:49 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 7b1150eb-a1c4-4d1c-bc41-7a36088a4e1c (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:35:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:50 np0005592157 python3[89682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:50 np0005592157 podman[89683]: 2026-01-22 13:35:50.271869706 +0000 UTC m=+0.053285029 container create 9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad (image=quay.io/ceph/ceph:v18, name=gifted_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 08:35:50 np0005592157 systemd[1]: Started libpod-conmon-9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad.scope.
Jan 22 08:35:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:50 np0005592157 podman[89683]: 2026-01-22 13:35:50.249038125 +0000 UTC m=+0.030453438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d855261edd65ca775817fb790c3d3d1ef9c383284c953c33c381fb3f9c22480/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d855261edd65ca775817fb790c3d3d1ef9c383284c953c33c381fb3f9c22480/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:50 np0005592157 podman[89683]: 2026-01-22 13:35:50.36330748 +0000 UTC m=+0.144722783 container init 9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad (image=quay.io/ceph/ceph:v18, name=gifted_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:35:50 np0005592157 podman[89683]: 2026-01-22 13:35:50.369559134 +0000 UTC m=+0.150974417 container start 9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad (image=quay.io/ceph/ceph:v18, name=gifted_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:35:50 np0005592157 podman[89683]: 2026-01-22 13:35:50.386444038 +0000 UTC m=+0.167859351 container attach 9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad (image=quay.io/ceph/ceph:v18, name=gifted_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/777136089' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 08:35:51 np0005592157 gifted_bardeen[89698]: [client.openstack]
Jan 22 08:35:51 np0005592157 gifted_bardeen[89698]: #011key = AQCZJnJpAAAAABAAqtkA7doM+5EIMhShr22e9w==
Jan 22 08:35:51 np0005592157 gifted_bardeen[89698]: #011caps mgr = "allow *"
Jan 22 08:35:51 np0005592157 gifted_bardeen[89698]: #011caps mon = "profile rbd"
Jan 22 08:35:51 np0005592157 gifted_bardeen[89698]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 22 08:35:51 np0005592157 systemd[1]: libpod-9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad.scope: Deactivated successfully.
Jan 22 08:35:51 np0005592157 podman[89683]: 2026-01-22 13:35:51.028086817 +0000 UTC m=+0.809502120 container died 9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad (image=quay.io/ceph/ceph:v18, name=gifted_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1d855261edd65ca775817fb790c3d3d1ef9c383284c953c33c381fb3f9c22480-merged.mount: Deactivated successfully.
Jan 22 08:35:51 np0005592157 podman[89683]: 2026-01-22 13:35:51.075607443 +0000 UTC m=+0.857022736 container remove 9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad (image=quay.io/ceph/ceph:v18, name=gifted_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:51 np0005592157 systemd[1]: libpod-conmon-9739b6732898d0b2efc899f7e7336129a4ed818d57e557581bf540197063a8ad.scope: Deactivated successfully.
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v119: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:35:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:52 np0005592157 ansible-async_wrapper.py[89885]: Invoked with j926497256384 30 /home/zuul/.ansible/tmp/ansible-tmp-1769088952.1413822-37585-18438134152859/AnsiballZ_command.py _
Jan 22 08:35:52 np0005592157 ansible-async_wrapper.py[89888]: Starting module and watcher
Jan 22 08:35:52 np0005592157 ansible-async_wrapper.py[89888]: Start watching 89889 (30)
Jan 22 08:35:52 np0005592157 ansible-async_wrapper.py[89889]: Start module (89889)
Jan 22 08:35:52 np0005592157 ansible-async_wrapper.py[89885]: Return async_wrapper task started.
Jan 22 08:35:52 np0005592157 python3[89890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:52 np0005592157 podman[89891]: 2026-01-22 13:35:52.883255003 +0000 UTC m=+0.056234201 container create de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb (image=quay.io/ceph/ceph:v18, name=sharp_babbage, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:35:52 np0005592157 systemd[1]: Started libpod-conmon-de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb.scope.
Jan 22 08:35:52 np0005592157 podman[89891]: 2026-01-22 13:35:52.854840006 +0000 UTC m=+0.027819264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6d0d405f4ca9dfe303e46550c9bbe249cc4771f730f168b3910be193fadb8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6d0d405f4ca9dfe303e46550c9bbe249cc4771f730f168b3910be193fadb8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:52 np0005592157 podman[89891]: 2026-01-22 13:35:52.974600376 +0000 UTC m=+0.147579564 container init de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb (image=quay.io/ceph/ceph:v18, name=sharp_babbage, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:52 np0005592157 podman[89891]: 2026-01-22 13:35:52.981850063 +0000 UTC m=+0.154829241 container start de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb (image=quay.io/ceph/ceph:v18, name=sharp_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:52 np0005592157 podman[89891]: 2026-01-22 13:35:52.986019926 +0000 UTC m=+0.158999114 container attach de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb (image=quay.io/ceph/ceph:v18, name=sharp_babbage, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e35 e35: 2 total, 2 up, 2 in
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 2 up, 2 in
Jan 22 08:35:53 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 6b65ed91-d881-41d6-b226-c583acef6bc7 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:35:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v121: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:53 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 08:35:53 np0005592157 sharp_babbage[89906]: 
Jan 22 08:35:53 np0005592157 sharp_babbage[89906]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 08:35:53 np0005592157 systemd[1]: libpod-de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb.scope: Deactivated successfully.
Jan 22 08:35:53 np0005592157 podman[89891]: 2026-01-22 13:35:53.592165784 +0000 UTC m=+0.765144982 container died de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb (image=quay.io/ceph/ceph:v18, name=sharp_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:35:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7d6d0d405f4ca9dfe303e46550c9bbe249cc4771f730f168b3910be193fadb8e-merged.mount: Deactivated successfully.
Jan 22 08:35:53 np0005592157 podman[89891]: 2026-01-22 13:35:53.637430875 +0000 UTC m=+0.810410033 container remove de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb (image=quay.io/ceph/ceph:v18, name=sharp_babbage, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:53 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Jan 22 08:35:53 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Jan 22 08:35:53 np0005592157 systemd[1]: libpod-conmon-de6305219235aefbcf8597e3b76b1190d6905587909129213640a7961a72b8cb.scope: Deactivated successfully.
Jan 22 08:35:53 np0005592157 ansible-async_wrapper.py[89889]: Module complete (89889)
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/777136089' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 08:35:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:53 np0005592157 python3[89990]: ansible-ansible.legacy.async_status Invoked with jid=j926497256384.89885 mode=status _async_dir=/root/.ansible_async
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 23eb5cc6-b164-4ccd-a311-335ee00e9fe2 (Updating crash deployment (+1 -> 3))
Jan 22 08:35:54 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 23eb5cc6-b164-4ccd-a311-335ee00e9fe2 (Updating crash deployment (+1 -> 3)) in 9 seconds
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 22 08:35:54 np0005592157 python3[90084]: ansible-ansible.legacy.async_status Invoked with jid=j926497256384.89885 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e36 e36: 2 total, 2 up, 2 in
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e36: 2 total, 2 up, 2 in
Jan 22 08:35:54 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev cc14ec1a-0184-47bc-a80d-7f5773399774 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 36 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=36 pruub=9.012537003s) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active pruub 91.921409607s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:54 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=36 pruub=14.962036133s) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active pruub 97.870964050s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:54 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=36 pruub=14.962036133s) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown pruub 97.870964050s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:54 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 36 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=36 pruub=9.012537003s) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown pruub 91.921409607s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:54 np0005592157 podman[90180]: 2026-01-22 13:35:54.650658795 +0000 UTC m=+0.043996651 container create ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:54 np0005592157 systemd[1]: Started libpod-conmon-ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934.scope.
Jan 22 08:35:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:54 np0005592157 podman[90180]: 2026-01-22 13:35:54.633736009 +0000 UTC m=+0.027073885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:54 np0005592157 podman[90180]: 2026-01-22 13:35:54.739126876 +0000 UTC m=+0.132464732 container init ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:54 np0005592157 podman[90180]: 2026-01-22 13:35:54.749111241 +0000 UTC m=+0.142449097 container start ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:35:54 np0005592157 upbeat_neumann[90196]: 167 167
Jan 22 08:35:54 np0005592157 podman[90180]: 2026-01-22 13:35:54.753266913 +0000 UTC m=+0.146604769 container attach ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:35:54 np0005592157 systemd[1]: libpod-ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934.scope: Deactivated successfully.
Jan 22 08:35:54 np0005592157 podman[90180]: 2026-01-22 13:35:54.754786721 +0000 UTC m=+0.148124577 container died ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:35:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4b7d3671be2ed5f96feeedcc22442e8a4dec764af84b5b1b41bb57017c2e3d0a-merged.mount: Deactivated successfully.
Jan 22 08:35:54 np0005592157 podman[90180]: 2026-01-22 13:35:54.797107129 +0000 UTC m=+0.190444985 container remove ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_neumann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:54 np0005592157 systemd[1]: libpod-conmon-ff92e5f1e83f6494b0a3a248dc9f4ffbcfa93593e1d951968d3554bdf979f934.scope: Deactivated successfully.
Jan 22 08:35:54 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 7 completed events
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Jan 22 08:35:54 np0005592157 podman[90246]: 2026-01-22 13:35:54.966454016 +0000 UTC m=+0.045855967 container create e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dirac, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:54 np0005592157 python3[90240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:55 np0005592157 systemd[1]: Started libpod-conmon-e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46.scope.
Jan 22 08:35:55 np0005592157 podman[90260]: 2026-01-22 13:35:55.042452261 +0000 UTC m=+0.047045946 container create 1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75 (image=quay.io/ceph/ceph:v18, name=hopeful_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 08:35:55 np0005592157 podman[90246]: 2026-01-22 13:35:54.94828089 +0000 UTC m=+0.027682871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebd75ad923ccf82bdb1634de99d73cef836a9f83652ff0d72e7e238e4034eb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebd75ad923ccf82bdb1634de99d73cef836a9f83652ff0d72e7e238e4034eb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebd75ad923ccf82bdb1634de99d73cef836a9f83652ff0d72e7e238e4034eb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebd75ad923ccf82bdb1634de99d73cef836a9f83652ff0d72e7e238e4034eb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebd75ad923ccf82bdb1634de99d73cef836a9f83652ff0d72e7e238e4034eb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592157 systemd[1]: Started libpod-conmon-1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75.scope.
Jan 22 08:35:55 np0005592157 podman[90246]: 2026-01-22 13:35:55.083713244 +0000 UTC m=+0.163115225 container init e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7790e7c8f72922faf195ab1ced37991834217a5cdcccc89d3cfec975406a0268/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7790e7c8f72922faf195ab1ced37991834217a5cdcccc89d3cfec975406a0268/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592157 podman[90246]: 2026-01-22 13:35:55.099880761 +0000 UTC m=+0.179282712 container start e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:55 np0005592157 podman[90246]: 2026-01-22 13:35:55.104067164 +0000 UTC m=+0.183469115 container attach e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:55 np0005592157 podman[90260]: 2026-01-22 13:35:55.108880802 +0000 UTC m=+0.113474477 container init 1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75 (image=quay.io/ceph/ceph:v18, name=hopeful_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 08:35:55 np0005592157 podman[90260]: 2026-01-22 13:35:55.114388527 +0000 UTC m=+0.118982202 container start 1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75 (image=quay.io/ceph/ceph:v18, name=hopeful_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:55 np0005592157 podman[90260]: 2026-01-22 13:35:55.021066406 +0000 UTC m=+0.025660171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:55 np0005592157 podman[90260]: 2026-01-22 13:35:55.118568729 +0000 UTC m=+0.123162434 container attach 1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75 (image=quay.io/ceph/ceph:v18, name=hopeful_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v123: 131 pgs: 2 peering, 62 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Jan 22 08:35:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 08:35:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 22 08:35:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 08:35:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e37 e37: 2 total, 2 up, 2 in
Jan 22 08:35:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e37: 2 total, 2 up, 2 in
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 051fe5b4-e88f-4781-82e9-023c99165107 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 7b1150eb-a1c4-4d1c-bc41-7a36088a4e1c (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 7b1150eb-a1c4-4d1c-bc41-7a36088a4e1c (PG autoscaler increasing pool 4 PGs from 1 to 32) in 6 seconds
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 6b65ed91-d881-41d6-b226-c583acef6bc7 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 6b65ed91-d881-41d6-b226-c583acef6bc7 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev cc14ec1a-0184-47bc-a80d-7f5773399774 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event cc14ec1a-0184-47bc-a80d-7f5773399774 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 051fe5b4-e88f-4781-82e9-023c99165107 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 051fe5b4-e88f-4781-82e9-023c99165107 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1c( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1a( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1f( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.11( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.12( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.14( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1e( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.17( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.9( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.b( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.a( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.d( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.6( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.7( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.4( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.2( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.3( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=10.833010674s) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 94.761421204s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.e( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1d( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.18( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.19( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=10.833010674s) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown pruub 94.761421204s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.5( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.8( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.c( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.10( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.13( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.15( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.16( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.17( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1b( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=18/19 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.f( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1c( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1f( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1a( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.11( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.14( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.12( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1e( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.17( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.9( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.a( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.d( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=36/37 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.0( empty local-lis/les=36/37 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.6( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.7( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.2( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.e( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.4( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1d( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.3( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.18( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.19( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.5( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.8( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.c( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.10( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.13( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.16( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.17( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.15( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.1b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[5.f( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=18/18 les/c/f=19/19/0 sis=36) [0] r=0 lpr=36 pi=[18,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:55 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 08:35:55 np0005592157 hopeful_antonelli[90280]: 
Jan 22 08:35:55 np0005592157 hopeful_antonelli[90280]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 08:35:55 np0005592157 systemd[1]: libpod-1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75.scope: Deactivated successfully.
Jan 22 08:35:55 np0005592157 podman[90310]: 2026-01-22 13:35:55.828843284 +0000 UTC m=+0.031371421 container died 1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75 (image=quay.io/ceph/ceph:v18, name=hopeful_antonelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7790e7c8f72922faf195ab1ced37991834217a5cdcccc89d3cfec975406a0268-merged.mount: Deactivated successfully.
Jan 22 08:35:55 np0005592157 podman[90310]: 2026-01-22 13:35:55.881608499 +0000 UTC m=+0.084136616 container remove 1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75 (image=quay.io/ceph/ceph:v18, name=hopeful_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:55 np0005592157 systemd[1]: libpod-conmon-1bfdd1d498baf7de68ce0b4e005ef7201402c42223a820a025af8667f789fb75.scope: Deactivated successfully.
Jan 22 08:35:55 np0005592157 practical_dirac[90275]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:35:55 np0005592157 practical_dirac[90275]: --> relative data size: 1.0
Jan 22 08:35:55 np0005592157 practical_dirac[90275]: --> All data devices are unavailable
Jan 22 08:35:55 np0005592157 systemd[1]: libpod-e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46.scope: Deactivated successfully.
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 08:35:56 np0005592157 podman[90332]: 2026-01-22 13:35:56.202966697 +0000 UTC m=+0.239078829 container died e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:35:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4ebd75ad923ccf82bdb1634de99d73cef836a9f83652ff0d72e7e238e4034eb2-merged.mount: Deactivated successfully.
Jan 22 08:35:56 np0005592157 podman[90332]: 2026-01-22 13:35:56.246989438 +0000 UTC m=+0.283101550 container remove e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dirac, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:56 np0005592157 systemd[1]: libpod-conmon-e74a35efeccd0912f1506baa7480a1482296927aaa12342fb2760956d4464c46.scope: Deactivated successfully.
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e38 e38: 2 total, 2 up, 2 in
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e38: 2 total, 2 up, 2 in
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=37/38 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [0] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"} v 0) v1
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]': finished
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:35:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:35:56 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:35:56 np0005592157 python3[90472]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:56 np0005592157 podman[90506]: 2026-01-22 13:35:56.887440898 +0000 UTC m=+0.027881955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:57 np0005592157 podman[90506]: 2026-01-22 13:35:57.240863263 +0000 UTC m=+0.381304340 container create cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e (image=quay.io/ceph/ceph:v18, name=funny_chaplygin, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 08:35:57 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.102:0/3979291260' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:57 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:57 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]': finished
Jan 22 08:35:57 np0005592157 podman[90523]: 2026-01-22 13:35:57.29289248 +0000 UTC m=+0.390240550 container create abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 08:35:57 np0005592157 systemd[1]: Started libpod-conmon-cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e.scope.
Jan 22 08:35:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:57 np0005592157 systemd[1]: Started libpod-conmon-abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e.scope.
Jan 22 08:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5f28588b87ae32ee2846f8cf31438e56ad7108b4ceab5e459f4dd37ed18aaa2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5f28588b87ae32ee2846f8cf31438e56ad7108b4ceab5e459f4dd37ed18aaa2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:57 np0005592157 podman[90506]: 2026-01-22 13:35:57.35073427 +0000 UTC m=+0.491175357 container init cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e (image=quay.io/ceph/ceph:v18, name=funny_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:35:57 np0005592157 podman[90523]: 2026-01-22 13:35:57.26107607 +0000 UTC m=+0.358424220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:57 np0005592157 podman[90506]: 2026-01-22 13:35:57.358524421 +0000 UTC m=+0.498965478 container start cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e (image=quay.io/ceph/ceph:v18, name=funny_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 08:35:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:57 np0005592157 podman[90506]: 2026-01-22 13:35:57.363288748 +0000 UTC m=+0.503729825 container attach cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e (image=quay.io/ceph/ceph:v18, name=funny_chaplygin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:57 np0005592157 podman[90523]: 2026-01-22 13:35:57.372835123 +0000 UTC m=+0.470183223 container init abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:35:57 np0005592157 podman[90523]: 2026-01-22 13:35:57.379594659 +0000 UTC m=+0.476942729 container start abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:57 np0005592157 infallible_khorana[90546]: 167 167
Jan 22 08:35:57 np0005592157 systemd[1]: libpod-abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e.scope: Deactivated successfully.
Jan 22 08:35:57 np0005592157 podman[90523]: 2026-01-22 13:35:57.385236107 +0000 UTC m=+0.482584177 container attach abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:57 np0005592157 podman[90523]: 2026-01-22 13:35:57.386401516 +0000 UTC m=+0.483749586 container died abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 22 08:35:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v127: 146 pgs: 2 peering, 77 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:35:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4e52a4eeab3521881ab56cedd0c71d5abf10c05c0c375a182b3cb703c72945da-merged.mount: Deactivated successfully.
Jan 22 08:35:57 np0005592157 podman[90523]: 2026-01-22 13:35:57.424975592 +0000 UTC m=+0.522323662 container remove abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:35:57 np0005592157 systemd[1]: libpod-conmon-abb4389a373813c91218d0fc1ec5a48ce9fb01d2544350b3ddd95a0f0bd68a0e.scope: Deactivated successfully.
Jan 22 08:35:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:57 np0005592157 podman[90572]: 2026-01-22 13:35:57.621088655 +0000 UTC m=+0.058965638 container create bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:57 np0005592157 systemd[1]: Started libpod-conmon-bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683.scope.
Jan 22 08:35:57 np0005592157 ansible-async_wrapper.py[89888]: Done in kid B.
Jan 22 08:35:57 np0005592157 podman[90572]: 2026-01-22 13:35:57.60172892 +0000 UTC m=+0.039605913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5830264217cdc0399a0788eac4d426bad217215f5221a727e88321d4da397fd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5830264217cdc0399a0788eac4d426bad217215f5221a727e88321d4da397fd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5830264217cdc0399a0788eac4d426bad217215f5221a727e88321d4da397fd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5830264217cdc0399a0788eac4d426bad217215f5221a727e88321d4da397fd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:57 np0005592157 podman[90572]: 2026-01-22 13:35:57.743820698 +0000 UTC m=+0.181697761 container init bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:35:57 np0005592157 podman[90572]: 2026-01-22 13:35:57.756701144 +0000 UTC m=+0.194578127 container start bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:57 np0005592157 podman[90572]: 2026-01-22 13:35:57.761057931 +0000 UTC m=+0.198934904 container attach bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 08:35:57 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 08:35:57 np0005592157 funny_chaplygin[90541]: 
Jan 22 08:35:57 np0005592157 funny_chaplygin[90541]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 22 08:35:58 np0005592157 systemd[1]: libpod-cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e.scope: Deactivated successfully.
Jan 22 08:35:58 np0005592157 podman[90506]: 2026-01-22 13:35:58.009977671 +0000 UTC m=+1.150418728 container died cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e (image=quay.io/ceph/ceph:v18, name=funny_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a5f28588b87ae32ee2846f8cf31438e56ad7108b4ceab5e459f4dd37ed18aaa2-merged.mount: Deactivated successfully.
Jan 22 08:35:58 np0005592157 podman[90506]: 2026-01-22 13:35:58.063532455 +0000 UTC m=+1.203973492 container remove cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e (image=quay.io/ceph/ceph:v18, name=funny_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 22 08:35:58 np0005592157 systemd[1]: libpod-conmon-cd9774735eaa355e7a4c29c947e4a5ab31a54183d82415d51946036e6cd68f5e.scope: Deactivated successfully.
Jan 22 08:35:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]: {
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:    "0": [
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:        {
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "devices": [
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "/dev/loop3"
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            ],
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "lv_name": "ceph_lv0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "lv_size": "7511998464",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "name": "ceph_lv0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "tags": {
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.cluster_name": "ceph",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.crush_device_class": "",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.encrypted": "0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.osd_id": "0",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.type": "block",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:                "ceph.vdo": "0"
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            },
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "type": "block",
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:            "vg_name": "ceph_vg0"
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:        }
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]:    ]
Jan 22 08:35:58 np0005592157 lucid_diffie[90589]: }
Jan 22 08:35:58 np0005592157 systemd[1]: libpod-bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683.scope: Deactivated successfully.
Jan 22 08:35:58 np0005592157 podman[90631]: 2026-01-22 13:35:58.586393539 +0000 UTC m=+0.028520201 container died bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:35:58 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.1 deep-scrub starts
Jan 22 08:35:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:58 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.1 deep-scrub ok
Jan 22 08:35:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5830264217cdc0399a0788eac4d426bad217215f5221a727e88321d4da397fd8-merged.mount: Deactivated successfully.
Jan 22 08:35:58 np0005592157 podman[90631]: 2026-01-22 13:35:58.656236624 +0000 UTC m=+0.098363236 container remove bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:58 np0005592157 systemd[1]: libpod-conmon-bd66cd422d887ac7b938672dd2fa9cc89ac87cb47ba81a6ef2e3da9cce13c683.scope: Deactivated successfully.
Jan 22 08:35:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e40 e40: 3 total, 2 up, 3 in
Jan 22 08:35:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 2 up, 3 in
Jan 22 08:35:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:35:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:35:58 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:35:59 np0005592157 python3[90721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:35:59 np0005592157 podman[90772]: 2026-01-22 13:35:59.086858864 +0000 UTC m=+0.043500489 container create cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb (image=quay.io/ceph/ceph:v18, name=gifted_feistel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:59 np0005592157 systemd[1]: Started libpod-conmon-cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb.scope.
Jan 22 08:35:59 np0005592157 podman[90772]: 2026-01-22 13:35:59.067261053 +0000 UTC m=+0.023902708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:35:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107abac93138f1ec7fc811fd8fcf5155a3169263be0af64fb88fe2fb1ada4517/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107abac93138f1ec7fc811fd8fcf5155a3169263be0af64fb88fe2fb1ada4517/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:59 np0005592157 podman[90772]: 2026-01-22 13:35:59.188524889 +0000 UTC m=+0.145166514 container init cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb (image=quay.io/ceph/ceph:v18, name=gifted_feistel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:59 np0005592157 podman[90772]: 2026-01-22 13:35:59.199634892 +0000 UTC m=+0.156276517 container start cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb (image=quay.io/ceph/ceph:v18, name=gifted_feistel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 08:35:59 np0005592157 podman[90772]: 2026-01-22 13:35:59.203865396 +0000 UTC m=+0.160507111 container attach cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb (image=quay.io/ceph/ceph:v18, name=gifted_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:59 np0005592157 podman[90831]: 2026-01-22 13:35:59.366517918 +0000 UTC m=+0.047637810 container create 2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 08:35:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 2 peering, 108 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:35:59 np0005592157 systemd[1]: Started libpod-conmon-2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f.scope.
Jan 22 08:35:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:59 np0005592157 podman[90831]: 2026-01-22 13:35:59.435466631 +0000 UTC m=+0.116586553 container init 2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:59 np0005592157 podman[90831]: 2026-01-22 13:35:59.342646612 +0000 UTC m=+0.023766524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:59 np0005592157 podman[90831]: 2026-01-22 13:35:59.442428152 +0000 UTC m=+0.123548044 container start 2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:59 np0005592157 podman[90831]: 2026-01-22 13:35:59.446336248 +0000 UTC m=+0.127456190 container attach 2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:59 np0005592157 thirsty_swanson[90848]: 167 167
Jan 22 08:35:59 np0005592157 systemd[1]: libpod-2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f.scope: Deactivated successfully.
Jan 22 08:35:59 np0005592157 podman[90831]: 2026-01-22 13:35:59.448628454 +0000 UTC m=+0.129748346 container died 2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-852713c97821a4fde3f434943f2ebda86ac69cc00f34c3f9b5018874cc5417dd-merged.mount: Deactivated successfully.
Jan 22 08:35:59 np0005592157 podman[90831]: 2026-01-22 13:35:59.501003519 +0000 UTC m=+0.182123411 container remove 2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:59 np0005592157 systemd[1]: libpod-conmon-2e4026937c216ac9ee81b9a5ad698e19bdce52c4ba6d5eeb11e1bddceffb085f.scope: Deactivated successfully.
Jan 22 08:35:59 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.14289 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 08:35:59 np0005592157 podman[90889]: 2026-01-22 13:35:59.748750801 +0000 UTC m=+0.104550358 container create c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:59 np0005592157 gifted_feistel[90787]: 
Jan 22 08:35:59 np0005592157 gifted_feistel[90787]: [{"container_id": "451f24e807fd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.52%", "created": "2026-01-22T13:33:19.020133Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-22T13:33:19.065187Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-22T13:34:38.838691Z", "memory_usage": 11649679, "ports": [], "service_name": "crash", "started": "2026-01-22T13:33:18.798026Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-088fe176-0106-5401-803c-2da38b73b76a@crash.compute-0", "version": "18.2.7"}, {"container_id": "50d1ea49dfe7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.44%", "created": "2026-01-22T13:34:16.706785Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-22T13:34:16.764313Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-22T13:34:37.805411Z", "memory_usage": 11649679, "ports": [], "service_name": "crash", "started": "2026-01-22T13:34:16.463472Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-088fe176-0106-5401-803c-2da38b73b76a@crash.compute-1", "version": "18.2.7"}, {"daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-22T13:35:54.038351Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "crash", "status": 2, "status_desc": "starting"}, {"container_id": "db0fcc1ac1d4", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "34.91%", "created": "2026-01-22T13:32:00.018187Z", "daemon_id": "compute-0.nyayzk", "daemon_name": "mgr.compute-0.nyayzk", "daemon_type": "mgr", "events": ["2026-01-22T13:33:25.734200Z daemon:mgr.compute-0.nyayzk [INFO] \"Reconfigured mgr.compute-0.nyayzk on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-22T13:34:38.838616Z", "memory_usage": 547356672, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-22T13:31:59.889916Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-088fe176-0106-5401-803c-2da38b73b76a@mgr.compute-0.nyayzk", "version": "18.2.7"}, {"daemon_id": "compute-1.hzmatt", "daemon_name": "mgr.compute-1.hzmatt", "daemon_type": "mgr", "events": ["2026-01-22T13:35:45.281833Z daemon:mgr.compute-1.hzmatt [INFO] \"Deployed mgr.compute-1.hzmatt on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2.tjdsdx", "daemon_name": "mgr.compute-2.tjdsdx", "daemon_type": "mgr", "events": ["2026-01-22T13:35:43.522105Z daemon:mgr.compute-2.tjdsdx [INFO] \"Deployed mgr.compute-2.tjdsdx on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"container_id": "07669b4a5faa", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.71%", "created": "2026-01-22T13:31:54.044039Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-22T13:33:24.906001Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-22T13:34:38.838512Z", "memory_request": 2147483648, "memory_usage": 32589742, "ports": [], "service_name": "mon", "started": "2026-01-22T13:31:56.815964Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-088fe176-0106-5401-803c-2da38b73b76a@mon.compute-0", "version": "18.2.7"}, {"daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2026-01-22T13:35:41.578807Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2026-01-22T13:35:29.026748Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, 
{"container_id": "447e358c079d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "8.41%", "created": "2026-01-22T13:34:30.033875Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-22T13:34:30.102419Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-22T13:34:38.838751Z", "memory_request": 4294967296, "memory_usage": 35987128, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-22T13:34:29.891623Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-088fe176-0106-5401-803c-2da38b73b76a@osd.0", "version": "18.2.7"}, {"container_id": "a71bbb89b63e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "6.92%", "created": "2026-01-22T13:34:32.839223Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-22T13:34:32.897175Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-22T13:34:37.805538Z", "memory_request": 5502921113, "memory_usage": 31635537, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-22T13:34:32.561964Z", 
"status": 1, "status_desc": "running", "systemd_unit": "ceph-088fe176-0106-5401-803c-2da38b73b76a@osd.1", "version": "18.2.7"}]
Jan 22 08:35:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:59 np0005592157 podman[90889]: 2026-01-22 13:35:59.673419941 +0000 UTC m=+0.029219518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:59 np0005592157 systemd[1]: libpod-cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb.scope: Deactivated successfully.
Jan 22 08:35:59 np0005592157 podman[90772]: 2026-01-22 13:35:59.772313259 +0000 UTC m=+0.728954904 container died cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb (image=quay.io/ceph/ceph:v18, name=gifted_feistel, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:35:59 np0005592157 systemd[1]: Started libpod-conmon-c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618.scope.
Jan 22 08:35:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-107abac93138f1ec7fc811fd8fcf5155a3169263be0af64fb88fe2fb1ada4517-merged.mount: Deactivated successfully.
Jan 22 08:35:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 22 08:35:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a93d296eb42da59291ef76e66206b05489a760e79801621aa7a9f8f3116e2ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a93d296eb42da59291ef76e66206b05489a760e79801621aa7a9f8f3116e2ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a93d296eb42da59291ef76e66206b05489a760e79801621aa7a9f8f3116e2ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a93d296eb42da59291ef76e66206b05489a760e79801621aa7a9f8f3116e2ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:59 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 11 completed events
Jan 22 08:35:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:36:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e41 e41: 3 total, 2 up, 3 in
Jan 22 08:36:00 np0005592157 podman[90772]: 2026-01-22 13:36:00.050561079 +0000 UTC m=+1.007202754 container remove cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 2 up, 3 in
Jan 22 08:36:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:00 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:00 np0005592157 systemd[1]: libpod-conmon-cd126b932efa23d78d0658a221d783986ad3e9b0435b3c821fde1537777298eb.scope: Deactivated successfully.
Jan 22 08:36:00 np0005592157 podman[90889]: 2026-01-22 13:36:00.104066462 +0000 UTC m=+0.459866019 container init c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 08:36:00 np0005592157 podman[90889]: 2026-01-22 13:36:00.11173548 +0000 UTC m=+0.467535037 container start c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:00 np0005592157 podman[90889]: 2026-01-22 13:36:00.115742829 +0000 UTC m=+0.471542386 container attach c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lovelace, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:36:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 22 08:36:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]: {
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]:        "osd_id": 0,
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]:        "type": "bluestore"
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]:    }
Jan 22 08:36:00 np0005592157 intelligent_lovelace[90920]: }
Jan 22 08:36:01 np0005592157 systemd[1]: libpod-c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618.scope: Deactivated successfully.
Jan 22 08:36:01 np0005592157 podman[90889]: 2026-01-22 13:36:01.011232059 +0000 UTC m=+1.367031626 container died c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9a93d296eb42da59291ef76e66206b05489a760e79801621aa7a9f8f3116e2ed-merged.mount: Deactivated successfully.
Jan 22 08:36:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:01 np0005592157 podman[90889]: 2026-01-22 13:36:01.075447295 +0000 UTC m=+1.431246862 container remove c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:36:01 np0005592157 systemd[1]: libpod-conmon-c55b0fcb13325d64d60065814f4aa7fd8bec91f9395c24df9db3e96f0bd11618.scope: Deactivated successfully.
Jan 22 08:36:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:36:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:36:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:01 np0005592157 python3[90978]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:36:01 np0005592157 podman[90979]: 2026-01-22 13:36:01.296638654 +0000 UTC m=+0.058477487 container create a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe (image=quay.io/ceph/ceph:v18, name=blissful_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:01 np0005592157 systemd[1]: Started libpod-conmon-a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe.scope.
Jan 22 08:36:01 np0005592157 podman[90979]: 2026-01-22 13:36:01.275895404 +0000 UTC m=+0.037734207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:36:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:36:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac3db1c00ed1ae7c0e35ea250fae7f56c61bca99881709e54d0bb593b9237ccd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac3db1c00ed1ae7c0e35ea250fae7f56c61bca99881709e54d0bb593b9237ccd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 31 unknown, 146 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:01 np0005592157 podman[90979]: 2026-01-22 13:36:01.399641892 +0000 UTC m=+0.161480725 container init a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe (image=quay.io/ceph/ceph:v18, name=blissful_hodgkin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:01 np0005592157 podman[90979]: 2026-01-22 13:36:01.407377442 +0000 UTC m=+0.169216225 container start a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe (image=quay.io/ceph/ceph:v18, name=blissful_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:01 np0005592157 podman[90979]: 2026-01-22 13:36:01.411993005 +0000 UTC m=+0.173831868 container attach a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe (image=quay.io/ceph/ceph:v18, name=blissful_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 22 08:36:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265572544' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 08:36:02 np0005592157 blissful_hodgkin[90994]: 
Jan 22 08:36:02 np0005592157 blissful_hodgkin[90994]: {"fsid":"088fe176-0106-5401-803c-2da38b73b76a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":20,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":2,"osd_up_since":1769088883,"num_in_osds":3,"osd_in_since":1769088956,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":108},{"state_name":"active+clean","count":67},{"state_name":"peering","count":2}],"num_pgs":177,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56184832,"bytes_avail":14967812096,"bytes_total":15023996928,"unknown_pgs_ratio":0.61016947031021118,"inactive_pgs_ratio":0.011299435049295425},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-22T13:35:45.386856+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"7facd680-c6b6-4660-bb1b-17747351be11":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 22 08:36:02 np0005592157 systemd[1]: libpod-a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe.scope: Deactivated successfully.
Jan 22 08:36:02 np0005592157 podman[91019]: 2026-01-22 13:36:02.099035629 +0000 UTC m=+0.030263334 container died a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe (image=quay.io/ceph/ceph:v18, name=blissful_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 08:36:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ac3db1c00ed1ae7c0e35ea250fae7f56c61bca99881709e54d0bb593b9237ccd-merged.mount: Deactivated successfully.
Jan 22 08:36:02 np0005592157 podman[91019]: 2026-01-22 13:36:02.146431192 +0000 UTC m=+0.077658887 container remove a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe (image=quay.io/ceph/ceph:v18, name=blissful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 08:36:02 np0005592157 systemd[1]: libpod-conmon-a7456eb7d8ffccfdfa1684710328d6ab5d74fd0dbad478c9f72c7bb8f0a37ffe.scope: Deactivated successfully.
Jan 22 08:36:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:03 np0005592157 python3[91059]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:36:03 np0005592157 podman[91060]: 2026-01-22 13:36:03.297475796 +0000 UTC m=+0.046229436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:36:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 31 unknown, 146 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:03 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 22 08:36:03 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 22 08:36:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:03 np0005592157 podman[91060]: 2026-01-22 13:36:03.797490939 +0000 UTC m=+0.546244509 container create fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367 (image=quay.io/ceph/ceph:v18, name=gracious_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 08:36:04 np0005592157 systemd[1]: Started libpod-conmon-fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367.scope.
Jan 22 08:36:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:36:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da81938ce2f0148926391f09e55546067de4c0e832fb465f0da1d96f4c78f51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da81938ce2f0148926391f09e55546067de4c0e832fb465f0da1d96f4c78f51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:04 np0005592157 podman[91060]: 2026-01-22 13:36:04.724351398 +0000 UTC m=+1.473104978 container init fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367 (image=quay.io/ceph/ceph:v18, name=gracious_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:36:04 np0005592157 podman[91060]: 2026-01-22 13:36:04.734659542 +0000 UTC m=+1.483413142 container start fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367 (image=quay.io/ceph/ceph:v18, name=gracious_perlman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 08:36:04 np0005592157 podman[91060]: 2026-01-22 13:36:04.938894785 +0000 UTC m=+1.687648365 container attach fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367 (image=quay.io/ceph/ceph:v18, name=gracious_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 08:36:04 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mgr.compute-2.tjdsdx 192.168.122.102:0/969992693; not ready for session (expect reconnect)
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/459129720' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 08:36:05 np0005592157 gracious_perlman[91075]: 
Jan 22 08:36:05 np0005592157 systemd[1]: libpod-fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367.scope: Deactivated successfully.
Jan 22 08:36:05 np0005592157 gracious_perlman[91075]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502921113","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Jan 22 08:36:05 np0005592157 podman[91100]: 2026-01-22 13:36:05.334507215 +0000 UTC m=+0.028736776 container died fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367 (image=quay.io/ceph/ceph:v18, name=gracious_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.tjdsdx started
Jan 22 08:36:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5da81938ce2f0148926391f09e55546067de4c0e832fb465f0da1d96f4c78f51-merged.mount: Deactivated successfully.
Jan 22 08:36:05 np0005592157 podman[91100]: 2026-01-22 13:36:05.466339681 +0000 UTC m=+0.160569222 container remove fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367 (image=quay.io/ceph/ceph:v18, name=gracious_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:36:05 np0005592157 systemd[1]: libpod-conmon-fa77b377f00a60d5a54a6127d231f75d76073263a7136d13e486d42875025367.scope: Deactivated successfully.
Jan 22 08:36:05 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mgr.compute-2.tjdsdx 192.168.122.102:0/969992693; not ready for session (expect reconnect)
Jan 22 08:36:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 22 08:36:06 np0005592157 python3[91140]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:36:06 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Jan 22 08:36:06 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Jan 22 08:36:06 np0005592157 podman[91141]: 2026-01-22 13:36:06.549471088 +0000 UTC m=+0.036267622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:36:06 np0005592157 podman[91141]: 2026-01-22 13:36:06.723066859 +0000 UTC m=+0.209863403 container create 973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff (image=quay.io/ceph/ceph:v18, name=clever_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:36:06 np0005592157 systemd[1]: Started libpod-conmon-973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff.scope.
Jan 22 08:36:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:36:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a782a8fa0fe8217ba8d941556f485ec5b5226150c94b3547e11e2a6255215f44/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a782a8fa0fe8217ba8d941556f485ec5b5226150c94b3547e11e2a6255215f44/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:06 np0005592157 podman[91141]: 2026-01-22 13:36:06.841805383 +0000 UTC m=+0.328601907 container init 973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff (image=quay.io/ceph/ceph:v18, name=clever_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:36:06 np0005592157 podman[91141]: 2026-01-22 13:36:06.847290398 +0000 UTC m=+0.334086902 container start 973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff (image=quay.io/ceph/ceph:v18, name=clever_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:36:06 np0005592157 podman[91141]: 2026-01-22 13:36:06.85065717 +0000 UTC m=+0.337453694 container attach 973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff (image=quay.io/ceph/ceph:v18, name=clever_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:06 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mgr.compute-2.tjdsdx 192.168.122.102:0/969992693; not ready for session (expect reconnect)
Jan 22 08:36:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647988089' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:07 np0005592157 clever_bartik[91157]: mimic
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592157 systemd[1]: libpod-973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff.scope: Deactivated successfully.
Jan 22 08:36:07 np0005592157 podman[91141]: 2026-01-22 13:36:07.416680024 +0000 UTC m=+0.903476558 container died 973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff (image=quay.io/ceph/ceph:v18, name=clever_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:36:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a782a8fa0fe8217ba8d941556f485ec5b5226150c94b3547e11e2a6255215f44-merged.mount: Deactivated successfully.
Jan 22 08:36:07 np0005592157 podman[91141]: 2026-01-22 13:36:07.473885978 +0000 UTC m=+0.960682492 container remove 973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff (image=quay.io/ceph/ceph:v18, name=clever_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e42 e42: 3 total, 2 up, 3 in
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1a( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946269035s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.933746338s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1a( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946183205s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.933746338s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.943940163s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.931541443s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1c( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946052551s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.933753967s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.943851471s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.931541443s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1c( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945994377s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.933753967s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1f( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945994377s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.933799744s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1f( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945956230s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.933799744s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945881844s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.933799744s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945851326s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.933799744s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.11( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945959091s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.934013367s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.11( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945932388s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.934013367s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946188927s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.934425354s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946156502s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.934425354s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.947222710s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935523987s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.947195053s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935523987s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.948935509s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.937393188s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.948908806s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.937393188s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.9( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946179390s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.934715271s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.9( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946146965s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.934715271s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946855545s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935508728s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946829796s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935508728s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946108818s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.934844971s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946082115s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.934844971s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 systemd[1]: libpod-conmon-973d80ab13e10f51784c0c276f40e0165d7fce06574be67f9aa31fc6c200e6ff.scope: Deactivated successfully.
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.948463440s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.937446594s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945968628s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.934982300s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.948431969s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.937446594s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945940018s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.934982300s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.948287964s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.937477112s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.948257446s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.937477112s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945615768s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.934982300s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945555687s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.934982300s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.948017120s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.937500000s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.947961807s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.937500000s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946156502s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935859680s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.946123123s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935859680s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945805550s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935699463s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.947611809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.937553406s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.947581291s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.937553406s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945775032s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935699463s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.951047897s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.940444946s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.950225830s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.940444946s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945695877s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935966492s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945544243s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935852051s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945640564s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935958862s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945601463s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935958862s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945588112s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935966492s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945468903s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935852051s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.949625969s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.940345764s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.e( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945234299s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.935958862s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.e( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.945199013s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.935958862s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.949589729s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.940345764s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.18( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950861931s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.941673279s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.18( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950837135s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.941673279s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950788498s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.941772461s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950769424s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.941772461s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950783730s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.941978455s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950951576s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.942153931s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950763702s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.941978455s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950887680s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.942153931s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.f( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.952102661s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.943527222s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.f( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.952059746s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.943527222s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950551987s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.942031860s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950513840s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.942031860s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.10( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950444221s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.942054749s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.10( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.950392723s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.942054749s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951379776s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.943176270s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951351166s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.943176270s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.16( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951263428s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.943191528s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951388359s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.943344116s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.16( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951227188s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.943191528s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[5.1b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951368332s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.943344116s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951436043s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 107.943519592s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=11.951416016s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.943519592s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.945142746s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 108.937438965s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.945116997s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.937438965s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 2 up, 3 in
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.nyayzk(active, since 3m), standbys: compute-2.tjdsdx
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.tjdsdx", "id": "compute-2.tjdsdx"} v 0) v1
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.tjdsdx", "id": "compute-2.tjdsdx"}]: dispatch
Jan 22 08:36:07 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.1e( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.1d( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.10( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.14( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.b( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Jan 22 08:36:07 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Jan 22 08:36:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.hzmatt started
Jan 22 08:36:07 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mgr.compute-1.hzmatt 192.168.122.101:0/3027715417; not ready for session (expect reconnect)
Jan 22 08:36:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 22 08:36:08 np0005592157 python3[91219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:36:08 np0005592157 podman[91220]: 2026-01-22 13:36:08.629479902 +0000 UTC m=+0.053340334 container create 07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85 (image=quay.io/ceph/ceph:v18, name=epic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:08 np0005592157 systemd[1]: Started libpod-conmon-07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85.scope.
Jan 22 08:36:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:36:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946db5b0861136d6e47da897c6ccbcd984eb6ee745b64b27a5fad59df58eaa73/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946db5b0861136d6e47da897c6ccbcd984eb6ee745b64b27a5fad59df58eaa73/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:08 np0005592157 podman[91220]: 2026-01-22 13:36:08.602055932 +0000 UTC m=+0.025916404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:36:08 np0005592157 podman[91220]: 2026-01-22 13:36:08.707962989 +0000 UTC m=+0.131823421 container init 07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85 (image=quay.io/ceph/ceph:v18, name=epic_nash, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 08:36:08 np0005592157 podman[91220]: 2026-01-22 13:36:08.714242422 +0000 UTC m=+0.138102824 container start 07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85 (image=quay.io/ceph/ceph:v18, name=epic_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:36:08 np0005592157 podman[91220]: 2026-01-22 13:36:08.719148492 +0000 UTC m=+0.143008904 container attach 07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85 (image=quay.io/ceph/ceph:v18, name=epic_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:36:08 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mgr.compute-1.hzmatt 192.168.122.101:0/3027715417; not ready for session (expect reconnect)
Jan 22 08:36:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 22 08:36:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502293407' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 22 08:36:09 np0005592157 epic_nash[91235]: 
Jan 22 08:36:09 np0005592157 epic_nash[91235]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Jan 22 08:36:09 np0005592157 systemd[1]: libpod-07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85.scope: Deactivated successfully.
Jan 22 08:36:09 np0005592157 podman[91220]: 2026-01-22 13:36:09.37046524 +0000 UTC m=+0.794325682 container died 07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85 (image=quay.io/ceph/ceph:v18, name=epic_nash, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 38 peering, 139 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-946db5b0861136d6e47da897c6ccbcd984eb6ee745b64b27a5fad59df58eaa73-merged.mount: Deactivated successfully.
Jan 22 08:36:09 np0005592157 podman[91220]: 2026-01-22 13:36:09.429784098 +0000 UTC m=+0.853644480 container remove 07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85 (image=quay.io/ceph/ceph:v18, name=epic_nash, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:36:09 np0005592157 systemd[1]: libpod-conmon-07d6f99cf26336fcf859e263947a1408d85c2f5c439b2a71678efd0a18bf9f85.scope: Deactivated successfully.
Jan 22 08:36:09 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mgr.compute-1.hzmatt 192.168.122.101:0/3027715417; not ready for session (expect reconnect)
Jan 22 08:36:10 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mgr.compute-1.hzmatt 192.168.122.101:0/3027715417; not ready for session (expect reconnect)
Jan 22 08:36:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e43 e43: 3 total, 2 up, 3 in
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.1e( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.10( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.b( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 43 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 2 up, 3 in
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.nyayzk(active, since 3m), standbys: compute-2.tjdsdx, compute-1.hzmatt
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.hzmatt", "id": "compute-1.hzmatt"} v 0) v1
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hzmatt", "id": "compute-1.hzmatt"}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:36:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:36:11 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 22 08:36:11 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 22 08:36:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 08:36:13 np0005592157 ceph-mon[74359]: Deploying daemon osd.2 on compute-2
Jan 22 08:36:14 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 22 08:36:14 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 22 08:36:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:36:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 22 08:36:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 22 08:36:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:20 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 7facd680-c6b6-4660-bb1b-17747351be11 (Global Recovery Event) in 25 seconds
Jan 22 08:36:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:24 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 22 08:36:24 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 22 08:36:25 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 12 completed events
Jan 22 08:36:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:36:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:25 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 22 08:36:25 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 22 08:36:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:36:29 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Jan 22 08:36:29 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Jan 22 08:36:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:36:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 22 08:36:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:31 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Jan 22 08:36:31 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e44 e44: 3 total, 2 up, 3 in
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 2 up, 3 in
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:32 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e44 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Jan 22 08:36:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e45 e45: 3 total, 2 up, 3 in
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 2 up, 3 in
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:33 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.1b( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.679436684s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 134.127166748s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.1b( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.679436684s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127166748s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.530319214s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 131.978103638s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.12( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487301826s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.935150146s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.530319214s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.978103638s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.12( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487301826s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.935150146s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.15( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.679365158s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 134.127273560s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.15( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.679365158s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127273560s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.486564636s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.934646606s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.486564636s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.934646606s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678981781s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 134.127090454s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.10( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.679028511s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 134.127166748s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.10( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.679028511s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127166748s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678981781s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127090454s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.d( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678959846s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 134.127304077s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.d( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678959846s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127304077s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.d( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487215042s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.935623169s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.a( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678672791s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 134.127105713s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.d( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487215042s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.935623169s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.a( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678672791s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127105713s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[3.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=9.474218369s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 131.922760010s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[3.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=9.474218369s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.922760010s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.0( empty local-lis/les=36/37 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487215996s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.935882568s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487974167s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.936660767s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487974167s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.936660767s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=10.492096901s) [] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 132.940841675s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.0( empty local-lis/les=36/37 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487215996s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.935882568s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=10.492096901s) [] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.940841675s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487766266s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.936660767s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487766266s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.936660767s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.c( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678061485s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 134.127029419s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.486676216s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.935653687s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=9.470788002s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 131.919799805s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[2.c( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.678061485s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127029419s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.486676216s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.935653687s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487236023s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.936294556s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=9.470788002s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.919799805s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.488192558s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.937286377s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.487236023s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.936294556s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=9.473524094s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 131.922698975s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.488192558s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.937286377s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.493031502s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.942230225s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=9.473524094s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.922698975s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.528753281s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 131.978073120s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.493031502s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.942230225s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.528753281s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.978073120s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.8( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.492999077s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.942398071s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.13( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.493552208s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.942993164s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.8( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.492999077s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.942398071s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[5.13( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.493552208s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.942993164s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.494113922s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 active pruub 131.943603516s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.528611183s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 131.978103638s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=9.494113922s) [] r=-1 lpr=45 pi=[36,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.943603516s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 45 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.528611183s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.978103638s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:33 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:34 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 8c670124-be92-421d-bef1-2bcee6138187 (Updating rgw.rgw deployment (+3 -> 3))
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:36:34 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.gfsxzw on compute-2
Jan 22 08:36:34 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.gfsxzw on compute-2
Jan 22 08:36:34 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:34 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: Deploying daemon rgw.rgw.compute-2.gfsxzw on compute-2
Jan 22 08:36:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:35 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Jan 22 08:36:35 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Jan 22 08:36:35 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:35 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:36 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:36 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:37 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 22 08:36:37 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 22 08:36:37 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:37 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:36:38 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 22 08:36:38 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 22 08:36:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:36:38 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:39 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 22 08:36:39 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:39 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e46 e46: 3 total, 2 up, 3 in
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 2 up, 3 in
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:40 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:40 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:40 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:36:41 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.thdhdp on compute-1
Jan 22 08:36:41 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.thdhdp on compute-1
Jan 22 08:36:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v156: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 08:36:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e47 e47: 3 total, 2 up, 3 in
Jan 22 08:36:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 46 pg[8.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:41 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 2 up, 3 in
Jan 22 08:36:42 np0005592157 ceph-mgr[74655]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:42 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592157 ceph-mon[74359]: Deploying daemon rgw.rgw.compute-1.thdhdp on compute-1
Jan 22 08:36:42 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:43 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e48 e48: 3 total, 2 up, 3 in
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 2 up, 3 in
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:43 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:36:43 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 48 pg[8.0( empty local-lis/les=46/48 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:36:43 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.iqhnfa on compute-0
Jan 22 08:36:43 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.iqhnfa on compute-0
Jan 22 08:36:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v159: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: OSD bench result of 4825.905468 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: Deploying daemon rgw.rgw.compute-0.iqhnfa on compute-0
Jan 22 08:36:43 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/892178328; not ready for session (expect reconnect)
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:43 np0005592157 ceph-mgr[74655]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 08:36:43 np0005592157 podman[91418]: 2026-01-22 13:36:43.984105217 +0000 UTC m=+0.048726711 container create 29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_curie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:44 np0005592157 systemd[1]: Started libpod-conmon-29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade.scope.
Jan 22 08:36:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:36:44 np0005592157 podman[91418]: 2026-01-22 13:36:44.043806815 +0000 UTC m=+0.108428339 container init 29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_curie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:44 np0005592157 podman[91418]: 2026-01-22 13:36:44.054198619 +0000 UTC m=+0.118820103 container start 29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 08:36:44 np0005592157 podman[91418]: 2026-01-22 13:36:43.960693805 +0000 UTC m=+0.025315329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:44 np0005592157 trusting_curie[91434]: 167 167
Jan 22 08:36:44 np0005592157 systemd[1]: libpod-29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade.scope: Deactivated successfully.
Jan 22 08:36:44 np0005592157 podman[91418]: 2026-01-22 13:36:44.062234965 +0000 UTC m=+0.126856459 container attach 29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:36:44 np0005592157 podman[91418]: 2026-01-22 13:36:44.062800599 +0000 UTC m=+0.127422113 container died 29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_curie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 08:36:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a58a077933bcc3ce929fdc22defdf9b54fa249cf3902a7cc8434bd834e5937e6-merged.mount: Deactivated successfully.
Jan 22 08:36:44 np0005592157 podman[91418]: 2026-01-22 13:36:44.125610722 +0000 UTC m=+0.190232216 container remove 29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 08:36:44 np0005592157 systemd[1]: libpod-conmon-29cd2f7b8c202f2e341af28ea0382a9461cfbb17667cc0e2a39a7c6c1970fade.scope: Deactivated successfully.
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328] boot
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 08:36:44 np0005592157 systemd[1]: Reloading.
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[9.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.1b( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445956588s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127166748s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.1b( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445919633s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127166748s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.15( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445622683s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127273560s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.13( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445397496s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127090454s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.12( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.13( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445376873s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127090454s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.15( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445565939s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127273560s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.12( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.10( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445365667s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127166748s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.10( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445351720s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127166748s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.b( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.c( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445073962s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127029419s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.d( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445321679s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127304077s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.c( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445043921s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127029419s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.d( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.d( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445286870s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127304077s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.d( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.a( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.445019364s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127105713s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[2.a( empty local-lis/les=28/29 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=1.444992423s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 134.127105713s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[3.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[3.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.0( empty local-lis/les=36/37 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.0( empty local-lis/les=36/37 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=49 pruub=0.258344471s) [2] r=-1 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.940841675s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=49 pruub=0.258050084s) [2] r=-1 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.940841675s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.8( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.13( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.8( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 49 pg[5.13( empty local-lis/les=36/37 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=-1 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:44 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:44 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:44 np0005592157 systemd[1]: Reloading.
Jan 22 08:36:44 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:44 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:44 np0005592157 systemd[1]: Starting Ceph rgw.rgw.compute-0.iqhnfa for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:36:45 np0005592157 podman[91577]: 2026-01-22 13:36:44.983841473 +0000 UTC m=+0.049422598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328] boot
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:45 np0005592157 podman[91577]: 2026-01-22 13:36:45.119879895 +0000 UTC m=+0.185460930 container create cb8b20b0d859c7fa647d0e1e3b6c94d56b0bf7f5e9d512c6e9b15111149e686c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-0-iqhnfa, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 22 08:36:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ad35e684e8161cea5e51e7dc503e6507ce7955f037896782d9cdf029bfe7be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ad35e684e8161cea5e51e7dc503e6507ce7955f037896782d9cdf029bfe7be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ad35e684e8161cea5e51e7dc503e6507ce7955f037896782d9cdf029bfe7be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ad35e684e8161cea5e51e7dc503e6507ce7955f037896782d9cdf029bfe7be/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.iqhnfa supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:45 np0005592157 podman[91577]: 2026-01-22 13:36:45.303795437 +0000 UTC m=+0.369376572 container init cb8b20b0d859c7fa647d0e1e3b6c94d56b0bf7f5e9d512c6e9b15111149e686c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-0-iqhnfa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:36:45 np0005592157 podman[91577]: 2026-01-22 13:36:45.314348905 +0000 UTC m=+0.379929950 container start cb8b20b0d859c7fa647d0e1e3b6c94d56b0bf7f5e9d512c6e9b15111149e686c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-0-iqhnfa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:36:45 np0005592157 bash[91577]: cb8b20b0d859c7fa647d0e1e3b6c94d56b0bf7f5e9d512c6e9b15111149e686c
Jan 22 08:36:45 np0005592157 systemd[1]: Started Ceph rgw.rgw.compute-0.iqhnfa for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:36:45 np0005592157 radosgw[91596]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:45 np0005592157 radosgw[91596]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 22 08:36:45 np0005592157 radosgw[91596]: framework: beast
Jan 22 08:36:45 np0005592157 radosgw[91596]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 22 08:36:45 np0005592157 radosgw[91596]: init_numa not setting numa affinity
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v161: 179 pgs: 1 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:45 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 50 pg[9.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:36:45 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 22 08:36:45 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 8c670124-be92-421d-bef1-2bcee6138187 (Updating rgw.rgw deployment (+3 -> 3))
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 8c670124-be92-421d-bef1-2bcee6138187 (Updating rgw.rgw deployment (+3 -> 3)) in 11 seconds
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 2935bb08-1179-434e-9376-f1adb7db9351 (Updating mds.cephfs deployment (+3 -> 3))
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:36:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.zycvef on compute-2
Jan 22 08:36:45 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.zycvef on compute-2
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:36:46
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Some PGs (0.156425) are inactive; try again later
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 08:36:46 np0005592157 ceph-mgr[74655]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 08:36:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 22 08:36:47 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 13 completed events
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:36:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v163: 179 pgs: 1 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 22 08:36:47 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Jan 22 08:36:47 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 22 08:36:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: Deploying daemon mds.cephfs.compute-2.zycvef on compute-2
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:48 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event a1eb61ba-aed4-4b93-b164-feedd46def74 (Global Recovery Event) in 6 seconds
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v166: 180 pgs: 2 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:36:49 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 22 08:36:49 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:36:50 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.zjixst on compute-0
Jan 22 08:36:50 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.zjixst on compute-0
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:50 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 22 08:36:50 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 22 08:36:50 np0005592157 podman[91820]: 2026-01-22 13:36:50.809490655 +0000 UTC m=+0.103373936 container create 56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 08:36:50 np0005592157 podman[91820]: 2026-01-22 13:36:50.744767654 +0000 UTC m=+0.038650995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:50 np0005592157 systemd[1]: Started libpod-conmon-56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b.scope.
Jan 22 08:36:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:36:50 np0005592157 podman[91820]: 2026-01-22 13:36:50.969866162 +0000 UTC m=+0.263749423 container init 56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:36:50 np0005592157 podman[91820]: 2026-01-22 13:36:50.97714396 +0000 UTC m=+0.271027211 container start 56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:50 np0005592157 gallant_lamport[91836]: 167 167
Jan 22 08:36:50 np0005592157 systemd[1]: libpod-56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b.scope: Deactivated successfully.
Jan 22 08:36:51 np0005592157 podman[91820]: 2026-01-22 13:36:51.161243466 +0000 UTC m=+0.455126737 container attach 56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:51 np0005592157 podman[91820]: 2026-01-22 13:36:51.16183653 +0000 UTC m=+0.455719781 container died 56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e3 new map
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:35:18.163248+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.zycvef{-1:24139} state up:standby seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:boot
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] as mds.0
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zycvef assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 22 08:36:51 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.zycvef v2:192.168.122.102:6804/2301191554; not ready for session (expect reconnect)
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.zycvef"} v 0) v1
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zycvef"}]: dispatch
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e3 all = 0
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e4 new map
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:51.171709+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:creating seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:creating}
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 53 pg[11.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [0] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3fb02894ca2f13cf77457e790048458d4b03c36d7ffedadd3f2028c7167b8cac-merged.mount: Deactivated successfully.
Jan 22 08:36:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v168: 181 pgs: 1 unknown, 2 active+clean+laggy, 1 creating+peering, 177 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 345 B/s wr, 4 op/s
Jan 22 08:36:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zycvef is now active in filesystem cephfs as rank 0
Jan 22 08:36:51 np0005592157 podman[91820]: 2026-01-22 13:36:51.437762749 +0000 UTC m=+0.731646020 container remove 56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 08:36:51 np0005592157 systemd[1]: libpod-conmon-56e38a27f6741c3b2cfaffc78b89d745a26ad3e083a9866a7897d6c34ad0318b.scope: Deactivated successfully.
Jan 22 08:36:51 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 22 08:36:51 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 22 08:36:51 np0005592157 systemd[1]: Reloading.
Jan 22 08:36:51 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:51 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:51 np0005592157 systemd[1]: Reloading.
Jan 22 08:36:52 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:52 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: Deploying daemon mds.cephfs.compute-0.zjixst on compute-0
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: daemon mds.cephfs.compute-2.zycvef assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: Cluster is now healthy
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: daemon mds.cephfs.compute-2.zycvef is now active in filesystem cephfs as rank 0
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e5 new map
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 22 08:36:52 np0005592157 systemd[1]: Starting Ceph mds.cephfs.compute-0.zjixst for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:active
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active}
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:52 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [0] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:52 np0005592157 podman[91979]: 2026-01-22 13:36:52.745283043 +0000 UTC m=+0.075839673 container create 60e8f874e9b9387b050a0387b84628380fbb1f0b1c527c871d4a2c3979f34d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-0-zjixst, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:36:52 np0005592157 podman[91979]: 2026-01-22 13:36:52.706354092 +0000 UTC m=+0.036910762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b798969f4104202c4ef29aca41d2228f9e856e832ed7714e10a2dac1f918be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b798969f4104202c4ef29aca41d2228f9e856e832ed7714e10a2dac1f918be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b798969f4104202c4ef29aca41d2228f9e856e832ed7714e10a2dac1f918be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b798969f4104202c4ef29aca41d2228f9e856e832ed7714e10a2dac1f918be/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.zjixst supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:53 np0005592157 podman[91979]: 2026-01-22 13:36:53.038613767 +0000 UTC m=+0.369170377 container init 60e8f874e9b9387b050a0387b84628380fbb1f0b1c527c871d4a2c3979f34d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-0-zjixst, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:36:53 np0005592157 podman[91979]: 2026-01-22 13:36:53.048369925 +0000 UTC m=+0.378926555 container start 60e8f874e9b9387b050a0387b84628380fbb1f0b1c527c871d4a2c3979f34d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-0-zjixst, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:53 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 14 completed events
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:36:53 np0005592157 bash[91979]: 60e8f874e9b9387b050a0387b84628380fbb1f0b1c527c871d4a2c3979f34d23
Jan 22 08:36:53 np0005592157 systemd[1]: Started Ceph mds.cephfs.compute-0.zjixst for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:36:53 np0005592157 ceph-mds[91998]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:53 np0005592157 ceph-mds[91998]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 22 08:36:53 np0005592157 ceph-mds[91998]: main not setting numa affinity
Jan 22 08:36:53 np0005592157 ceph-mds[91998]: pidfile_write: ignore empty --pid-file
Jan 22 08:36:53 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-0-zjixst[91994]: starting mds.cephfs.compute-0.zjixst at 
Jan 22 08:36:53 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Updating MDS map to version 5 from mon.0
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:53 np0005592157 ceph-mgr[74655]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v170: 181 pgs: 1 unknown, 2 active+clean+laggy, 1 creating+peering, 177 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 362 B/s wr, 4 op/s
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e6 new map
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Updating MDS map to version 6 from mon.0
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 22 08:36:54 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Monitors have assigned me to become a standby.
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] up:boot
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 1 up:standby
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zjixst"} v 0) v1
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zjixst"}]: dispatch
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e6 all = 0
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e7 new map
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 1 up:standby
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:54 np0005592157 ceph-mon[74359]: Cluster is now healthy
Jan 22 08:36:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:36:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.ofmmzj on compute-1
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.ofmmzj on compute-1
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v172: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 5.0 KiB/s wr, 20 op/s
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 1)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.089812488367765e-06 of space, bias 1.0, pg target 0.0009269437465103294 quantized to 32 (current 1)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:36:55 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 08:36:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:36:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:36:56 np0005592157 radosgw[91596]: LDAP not started since no server URIs were provided in the configuration.
Jan 22 08:36:56 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-0-iqhnfa[91592]: 2026-01-22T13:36:56.229+0000 7ff92a5bc940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 22 08:36:56 np0005592157 radosgw[91596]: framework: beast
Jan 22 08:36:56 np0005592157 radosgw[91596]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 22 08:36:56 np0005592157 radosgw[91596]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 22 08:36:56 np0005592157 radosgw[91596]: starting handler: beast
Jan 22 08:36:56 np0005592157 radosgw[91596]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 22 08:36:56 np0005592157 radosgw[91596]: mgrc service_daemon_register rgw.14331 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.iqhnfa,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=9ef52632-dffc-43fe-ad78-aca5b0d3574d,zone_name=default,zonegroup_id=961906d1-4e51-43eb-bd43-c4a4ab081aea,zonegroup_name=default}
Jan 22 08:36:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:57 np0005592157 ceph-mon[74359]: Deploying daemon mds.cephfs.compute-1.ofmmzj on compute-1
Jan 22 08:36:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:36:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:36:57 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v173: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 4.5 KiB/s wr, 16 op/s
Jan 22 08:36:57 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 22 08:36:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:36:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 22 08:36:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 22 08:36:58 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev 66b83795-c3f5-424e-9e9b-7d80ab125a03 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 22 08:36:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:36:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:36:58 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 8a1543cc-2316-46d9-8de2-c313c2e896b7 (Global Recovery Event) in 5 seconds
Jan 22 08:36:58 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 22 08:36:58 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 22 08:36:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:36:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 22 08:36:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v175: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 4.0 KiB/s wr, 14 op/s
Jan 22 08:36:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:36:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:59 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 22 08:36:59 np0005592157 python3[92586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:36:59 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 22 08:36:59 np0005592157 podman[92587]: 2026-01-22 13:36:59.851277504 +0000 UTC m=+0.085670653 container create f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa (image=quay.io/ceph/ceph:v18, name=objective_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 22 08:36:59 np0005592157 podman[92587]: 2026-01-22 13:36:59.806851479 +0000 UTC m=+0.041244648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:36:59 np0005592157 systemd[1]: Started libpod-conmon-f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa.scope.
Jan 22 08:36:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:36:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b560dbaac7b64a3d78c26d8e7b57d893fbb016c585addf6042df4065c2b23a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b560dbaac7b64a3d78c26d8e7b57d893fbb016c585addf6042df4065c2b23a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:37:00 np0005592157 podman[92587]: 2026-01-22 13:37:00.071638436 +0000 UTC m=+0.306031615 container init f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa (image=quay.io/ceph/ceph:v18, name=objective_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:37:00 np0005592157 podman[92587]: 2026-01-22 13:37:00.081731623 +0000 UTC m=+0.316124772 container start f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa (image=quay.io/ceph/ceph:v18, name=objective_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:37:00 np0005592157 podman[92587]: 2026-01-22 13:37:00.123280768 +0000 UTC m=+0.357673917 container attach f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa (image=quay.io/ceph/ceph:v18, name=objective_vaughan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e8 new map
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 22 08:37:00 np0005592157 ceph-mgr[74655]: mgr.server handle_open ignoring open from mds.cephfs.compute-1.ofmmzj v2:192.168.122.101:6804/2522830803; not ready for session (expect reconnect)
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] up:boot
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ofmmzj"} v 0) v1
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ofmmzj"}]: dispatch
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e8 all = 0
Jan 22 08:37:00 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev b68fea8a-5016-411a-8c93-7131471aa68e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 22 08:37:00 np0005592157 objective_vaughan[92602]: could not fetch user info: no user info saved
Jan 22 08:37:00 np0005592157 systemd[1]: libpod-f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa.scope: Deactivated successfully.
Jan 22 08:37:00 np0005592157 podman[92687]: 2026-01-22 13:37:00.73465676 +0000 UTC m=+0.038344158 container died f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa (image=quay.io/ceph/ceph:v18, name=objective_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:37:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 22 08:37:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:00 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 2935bb08-1179-434e-9376-f1adb7db9351 (Updating mds.cephfs deployment (+3 -> 3))
Jan 22 08:37:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 22 08:37:00 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 2935bb08-1179-434e-9376-f1adb7db9351 (Updating mds.cephfs deployment (+3 -> 3)) in 15 seconds
Jan 22 08:37:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e8b560dbaac7b64a3d78c26d8e7b57d893fbb016c585addf6042df4065c2b23a-merged.mount: Deactivated successfully.
Jan 22 08:37:00 np0005592157 podman[92687]: 2026-01-22 13:37:00.940269272 +0000 UTC m=+0.243956600 container remove f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa (image=quay.io/ceph/ceph:v18, name=objective_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:37:00 np0005592157 systemd[1]: libpod-conmon-f027b7fadddc921f0dac227e671a77ffb560d9eb7033f2e7c26b907b8fd248aa.scope: Deactivated successfully.
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 22 08:37:01 np0005592157 python3[92727]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 088fe176-0106-5401-803c-2da38b73b76a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 22 08:37:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v177: 181 pgs: 2 active+clean+laggy, 179 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 5.2 KiB/s wr, 161 op/s
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:01 np0005592157 podman[92728]: 2026-01-22 13:37:01.348257996 +0000 UTC m=+0.033734545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 22 08:37:01 np0005592157 podman[92728]: 2026-01-22 13:37:01.491501034 +0000 UTC m=+0.176977493 container create ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405 (image=quay.io/ceph/ceph:v18, name=distracted_allen, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:37:01 np0005592157 systemd[1]: Started libpod-conmon-ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405.scope.
Jan 22 08:37:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:37:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7500684669b8be3ce44ffe70d3d73420a6b1c79d8926549943a3c476bc1d5db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:37:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7500684669b8be3ce44ffe70d3d73420a6b1c79d8926549943a3c476bc1d5db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:37:01 np0005592157 podman[92728]: 2026-01-22 13:37:01.650807415 +0000 UTC m=+0.336283894 container init ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405 (image=quay.io/ceph/ceph:v18, name=distracted_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev cdf6db82-10d0-47b4-b3e0-a2f70daebac5 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 22 08:37:01 np0005592157 podman[92728]: 2026-01-22 13:37:01.656429343 +0000 UTC m=+0.341905802 container start ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405 (image=quay.io/ceph/ceph:v18, name=distracted_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 22 08:37:01 np0005592157 podman[92728]: 2026-01-22 13:37:01.727183221 +0000 UTC m=+0.412659700 container attach ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405 (image=quay.io/ceph/ceph:v18, name=distracted_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 22 08:37:01 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev a390da02-b0f4-4590-9b8f-3b87fd60ef9b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 22 08:37:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:02 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 58 pg[8.0( v 48'8 (0'0,48'8] local-lis/les=46/48 n=6 ec=46/46 lis/c=46/46 les/c/f=48/48/0 sis=58 pruub=13.094360352s) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 lcod 48'7 mlcod 48'7 active pruub 163.680282593s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:02 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 58 pg[8.0( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=46/46 lis/c=46/46 les/c/f=48/48/0 sis=58 pruub=13.094360352s) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 lcod 48'7 mlcod 0'0 unknown pruub 163.680282593s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:02 np0005592157 distracted_allen[92743]: {
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "user_id": "openstack",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "display_name": "openstack",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "email": "",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "suspended": 0,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "max_buckets": 1000,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "subusers": [],
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "keys": [
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        {
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:            "user": "openstack",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:            "access_key": "GM190I5L7XMRQXTQEBBI",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:            "secret_key": "gEAHbV2cQg3mjF6zpVbDB51YOaCS7L7wet21aGVu"
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        }
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    ],
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "swift_keys": [],
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "caps": [],
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "op_mask": "read, write, delete",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "default_placement": "",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "default_storage_class": "",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "placement_tags": [],
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "bucket_quota": {
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "enabled": false,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "check_on_raw": false,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "max_size": -1,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "max_size_kb": 0,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "max_objects": -1
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    },
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "user_quota": {
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "enabled": false,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "check_on_raw": false,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "max_size": -1,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "max_size_kb": 0,
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:        "max_objects": -1
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    },
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "temp_url_keys": [],
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "type": "rgw",
Jan 22 08:37:02 np0005592157 distracted_allen[92743]:    "mfa_ids": []
Jan 22 08:37:02 np0005592157 distracted_allen[92743]: }
Jan 22 08:37:02 np0005592157 distracted_allen[92743]: 
Jan 22 08:37:02 np0005592157 systemd[1]: libpod-ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405.scope: Deactivated successfully.
Jan 22 08:37:02 np0005592157 podman[92728]: 2026-01-22 13:37:02.476296826 +0000 UTC m=+1.161773295 container died ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405 (image=quay.io/ceph/ceph:v18, name=distracted_allen, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:37:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e7500684669b8be3ce44ffe70d3d73420a6b1c79d8926549943a3c476bc1d5db-merged.mount: Deactivated successfully.
Jan 22 08:37:02 np0005592157 podman[92728]: 2026-01-22 13:37:02.73071462 +0000 UTC m=+1.416191089 container remove ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405 (image=quay.io/ceph/ceph:v18, name=distracted_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:37:02 np0005592157 systemd[1]: libpod-conmon-ca98f0fa6ffd4b5c09803e4affb704a47a9db02cb63694971cedcdefb7f06405.scope: Deactivated successfully.
Jan 22 08:37:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:02 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.erkqlp on compute-0
Jan 22 08:37:02 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.erkqlp on compute-0
Jan 22 08:37:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 16 completed events
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v180: 243 pgs: 62 unknown, 2 active+clean+laggy, 179 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 113 KiB/s rd, 3.4 KiB/s wr, 200 op/s
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] update: starting ev a617b78d-4e01-429f-85d2-4bebf7dc768b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev 66b83795-c3f5-424e-9e9b-7d80ab125a03 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 66b83795-c3f5-424e-9e9b-7d80ab125a03 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev b68fea8a-5016-411a-8c93-7131471aa68e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event b68fea8a-5016-411a-8c93-7131471aa68e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev a390da02-b0f4-4590-9b8f-3b87fd60ef9b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event a390da02-b0f4-4590-9b8f-3b87fd60ef9b (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev a617b78d-4e01-429f-85d2-4bebf7dc768b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event a617b78d-4e01-429f-85d2-4bebf7dc768b (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.16( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.17( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.18( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.11( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.12( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.13( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1c( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.19( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1f( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1a( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.4( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.6( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.7( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.b( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.c( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.d( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.a( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.9( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.e( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.f( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.3( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.10( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.15( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.14( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.8( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.5( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1( v 48'8 (0'0,48'8] local-lis/les=46/48 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1d( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1e( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.2( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1b( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=46/48 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[9.0( v 58'684 (0'0,58'684] local-lis/les=49/50 n=120 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=14.089797974s) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 58'683 mlcod 58'683 active pruub 166.016601562s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[9.0( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=14.089797974s) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 58'683 mlcod 0'0 unknown pruub 166.016601562s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.18( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.16( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.17( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.11( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.12( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.13( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1c( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.19( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1a( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.7( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.0( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=46/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 48'7 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1f( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.4( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.6( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.c( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.b( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.d( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.a( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.e( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.10( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.3( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.f( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.14( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.15( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.8( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.5( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1e( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.9( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1d( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.2( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 59 pg[8.1b( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=46/46 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[46,58)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 22 08:37:03 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e9 new map
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:37:03.744747+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] up:standby
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:active
Jan 22 08:37:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: Deploying daemon haproxy.rgw.default.compute-0.erkqlp on compute-0
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 22 08:37:04 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 22 08:37:04 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 22 08:37:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 22 08:37:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1c( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1f( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1a( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.3( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.4( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.9( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.15( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.14( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.11( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.2( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[11.0( v 58'2 (0'0,58'2] local-lis/les=53/54 n=2 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=11.400670052s) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 57'1 mlcod 57'1 active pruub 165.038131714s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.f( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.e( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.b( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.8( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.c( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.a( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.d( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.6( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.7( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.5( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1b( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1e( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1d( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.18( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.12( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.10( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.13( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.17( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.19( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.16( v 58'684 lc 0'0 (0'0,58'684] local-lis/les=49/50 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1f( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[11.0( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=11.400670052s) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 57'1 mlcod 0'0 unknown pruub 165.038131714s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.4( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.3( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1a( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1c( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.15( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.14( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.11( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.2( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.f( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.e( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.b( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.c( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.0( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 58'683 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.a( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.d( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.6( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.8( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.5( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1b( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1e( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.1d( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.12( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.13( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.17( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.19( v 58'684 (0'0,58'684] local-lis/les=59/60 n=3 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 60 pg[9.7( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=58'684 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 peering, 62 unknown, 2 active+clean+laggy, 240 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 98 op/s
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Jan 22 08:37:05 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e10 new map
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:37:03.744747+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:06 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Updating MDS map to version 10 from mon.0
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] up:standby
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.18( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1d( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1e( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1( v 58'2 (0'0,58'2] local-lis/les=53/54 n=1 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.2( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=1 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.b( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.17( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.16( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.13( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.d( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.6( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.c( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.a( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.9( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.e( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.f( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.3( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.8( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.4( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.5( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.7( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.19( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1a( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1c( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1f( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.10( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.11( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.12( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.14( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.15( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1b( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=53/54 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.18( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1( v 58'2 (0'0,58'2] local-lis/les=60/61 n=1 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1d( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.2( v 58'2 (0'0,58'2] local-lis/les=60/61 n=1 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1e( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.b( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.16( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.d( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.0( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 57'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.13( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.9( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.6( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.c( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.e( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.a( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.f( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.3( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.4( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.7( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.17( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1a( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.19( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.5( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1f( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.8( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.10( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.12( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.14( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.15( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1b( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.11( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 61 pg[11.1c( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 1 peering, 62 unknown, 2 active+clean+laggy, 240 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 91 op/s
Jan 22 08:37:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:07 np0005592157 podman[92979]: 2026-01-22 13:37:07.638810302 +0000 UTC m=+4.306337446 container create 7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526 (image=quay.io/ceph/haproxy:2.3, name=crazy_pike)
Jan 22 08:37:07 np0005592157 podman[92979]: 2026-01-22 13:37:07.600195029 +0000 UTC m=+4.267722213 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 08:37:07 np0005592157 systemd[1]: Started libpod-conmon-7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526.scope.
Jan 22 08:37:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:37:07 np0005592157 podman[92979]: 2026-01-22 13:37:07.721479561 +0000 UTC m=+4.389006695 container init 7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526 (image=quay.io/ceph/haproxy:2.3, name=crazy_pike)
Jan 22 08:37:07 np0005592157 podman[92979]: 2026-01-22 13:37:07.726685538 +0000 UTC m=+4.394212642 container start 7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526 (image=quay.io/ceph/haproxy:2.3, name=crazy_pike)
Jan 22 08:37:07 np0005592157 crazy_pike[93097]: 0 0
Jan 22 08:37:07 np0005592157 systemd[1]: libpod-7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526.scope: Deactivated successfully.
Jan 22 08:37:07 np0005592157 podman[92979]: 2026-01-22 13:37:07.746444211 +0000 UTC m=+4.413971315 container attach 7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526 (image=quay.io/ceph/haproxy:2.3, name=crazy_pike)
Jan 22 08:37:07 np0005592157 podman[92979]: 2026-01-22 13:37:07.746778299 +0000 UTC m=+4.414305403 container died 7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526 (image=quay.io/ceph/haproxy:2.3, name=crazy_pike)
Jan 22 08:37:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0665b5e2335b25e17560e3d1183c0d842eeea8f80aa616b595e17e3a90a6a770-merged.mount: Deactivated successfully.
Jan 22 08:37:07 np0005592157 podman[92979]: 2026-01-22 13:37:07.980802805 +0000 UTC m=+4.648329919 container remove 7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526 (image=quay.io/ceph/haproxy:2.3, name=crazy_pike)
Jan 22 08:37:07 np0005592157 systemd[1]: libpod-conmon-7327deb7f5c096b6df33e83329df27c59465dfddd4284d18160bf25e23f99526.scope: Deactivated successfully.
Jan 22 08:37:08 np0005592157 systemd[1]: Reloading.
Jan 22 08:37:08 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:08 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:08 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 2b23733e-11a2-4d54-a579-a7a934347a2f (Global Recovery Event) in 5 seconds
Jan 22 08:37:08 np0005592157 systemd[1]: Reloading.
Jan 22 08:37:08 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:08 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:08 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 22 08:37:08 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 22 08:37:08 np0005592157 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.erkqlp for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:37:09 np0005592157 podman[93245]: 2026-01-22 13:37:09.321988069 +0000 UTC m=+0.120209826 container create 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:37:09 np0005592157 podman[93245]: 2026-01-22 13:37:09.230075845 +0000 UTC m=+0.028297652 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 08:37:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8713ca0877ba9a83898a83be302ff45fbdbee846ef0e5d642257e10f0cf0908/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 22 08:37:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 1 peering, 31 unknown, 2 active+clean+laggy, 271 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 82 op/s
Jan 22 08:37:09 np0005592157 podman[93245]: 2026-01-22 13:37:09.431874014 +0000 UTC m=+0.230095781 container init 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:37:09 np0005592157 podman[93245]: 2026-01-22 13:37:09.437168933 +0000 UTC m=+0.235390690 container start 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:37:09 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp[93260]: [NOTICE] 021/133709 (2) : New worker #1 (4) forked
Jan 22 08:37:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000073s ======
Jan 22 08:37:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:09.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000073s
Jan 22 08:37:09 np0005592157 bash[93245]: 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761
Jan 22 08:37:09 np0005592157 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.erkqlp for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:37:09 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 22 08:37:09 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 22 08:37:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:37:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:11.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 22 08:37:11 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 22 08:37:11 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 22 08:37:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 08:37:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 22 08:37:12 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.14( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.15( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.13( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.1b( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.18( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.5( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.2( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.8( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[10.19( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.1b( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.264389038s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.952316284s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1e( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929979324s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.617980957s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1d( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929945946s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.617980957s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1e( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929939270s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.617980957s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.1b( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.264307976s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.952316284s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1d( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929883957s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.617980957s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.2( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.263994217s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.952239990s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1( v 58'2 (0'0,58'2] local-lis/les=60/61 n=1 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929751396s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618011475s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.2( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.263967514s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.952239990s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1( v 58'2 (0'0,58'2] local-lis/les=60/61 n=1 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929690361s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618011475s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.5( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.263600349s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.952117920s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.5( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.263570786s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.952117920s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.8( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.263373375s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.952026367s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.8( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.263345718s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.952026367s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.16( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929642677s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618438721s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.17( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929985046s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618804932s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.16( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929593086s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618438721s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.14( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262978554s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951919556s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.15( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262989998s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951950073s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.15( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262908936s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951950073s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.14( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262866020s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951919556s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.13( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929334641s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618515015s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.10( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262688637s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951904297s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.17( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929927826s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618804932s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.10( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262663841s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951904297s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.13( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.929305077s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618515015s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.3( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262502670s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951919556s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.3( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262433052s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951919556s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.f( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262278557s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951904297s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.f( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262255669s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951904297s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.a( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928884506s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618637085s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.a( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928834915s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618637085s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.9( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262301445s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.952194214s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.9( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.262278557s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.952194214s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.e( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928371429s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618621826s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.e( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928345680s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618621826s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.d( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.261070251s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951431274s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.f( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928231239s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618667603s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.d( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.261022568s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951431274s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.a( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.261343002s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951675415s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.c( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.260912895s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951416016s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.f( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928202629s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618667603s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.c( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.260878563s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951416016s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.8( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928161621s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618759155s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.8( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.928106308s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618759155s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.b( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.260727882s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951431274s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.b( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.260698318s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951431274s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.4( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927984238s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618759155s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.3( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927900314s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618682861s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.4( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927960396s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618759155s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.3( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927850723s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618682861s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.5( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927886963s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618881226s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.5( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927850723s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618881226s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.7( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927657127s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618804932s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.a( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.260884285s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951675415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.6( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.260328293s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951431274s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.7( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927620888s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618804932s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.6( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.259942055s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951431274s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.19( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927180290s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618865967s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.19( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927147865s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618865967s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.19( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.259565353s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951354980s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.19( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.259533882s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951354980s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.1f( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.259489059s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951370239s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1c( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927209854s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.619094849s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1c( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.927155495s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.619094849s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.1f( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.259382248s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951370239s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.1c( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.259213448s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951309204s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.1c( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.259148598s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951309204s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.12( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258979797s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951293945s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.12( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926667213s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.619018555s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.12( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258952141s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951293945s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1a( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926864624s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.618850708s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.12( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926637650s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.619018555s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.11( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258856773s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951263428s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.11( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258802414s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951263428s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.14( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926483154s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.619033813s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1a( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926403046s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.618850708s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.14( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926452637s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.619033813s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.17( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258634567s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951232910s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.17( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258604050s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951232910s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1b( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926286697s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active pruub 171.619079590s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.16( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258463860s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951263428s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[11.1b( v 58'2 (0'0,58'2] local-lis/les=60/61 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.926254272s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.619079590s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.16( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258412361s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951263428s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.18( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258223534s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951217651s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.18( v 48'8 (0'0,48'8] local-lis/les=58/59 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258150101s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951217651s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.4( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.260164261s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active pruub 175.951400757s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 62 pg[8.4( v 48'8 (0'0,48'8] local-lis/les=58/59 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=14.258074760s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.951400757s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 08:37:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:37:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:13 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 21 completed events
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:37:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 22 08:37:13 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 22 08:37:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 30 peering, 2 active+clean+laggy, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 0 B/s wr, 95 op/s
Jan 22 08:37:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:15.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:15 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 22 08:37:15 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 22 08:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:37:16 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 22 08:37:16 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 22 08:37:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 30 peering, 2 active+clean+laggy, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 0 B/s wr, 83 op/s
Jan 22 08:37:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 22 08:37:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 22 08:37:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 22 08:37:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 22 08:37:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.19( v 58'96 (0'0,58'96] local-lis/les=62/63 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 22 08:37:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] Starting Global Recovery Event,61 pgs not in active + clean state
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.8( v 58'96 (0'0,58'96] local-lis/les=62/63 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.5( v 58'96 (0'0,58'96] local-lis/les=62/63 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.1b( v 58'96 (0'0,58'96] local-lis/les=62/63 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.18( v 58'96 (0'0,58'96] local-lis/les=62/63 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.14( v 61'99 lc 57'86 (0'0,61'99] local-lis/les=62/63 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=61'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.15( v 61'99 lc 57'78 (0'0,61'99] local-lis/les=62/63 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=61'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.13( v 58'96 (0'0,58'96] local-lis/les=62/63 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 63 pg[10.2( v 58'96 (0'0,58'96] local-lis/les=62/63 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 1 active+clean+scrubbing, 61 peering, 2 active+clean+laggy, 241 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 50 op/s
Jan 22 08:37:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:37:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:19.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 22 08:37:19 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 22 08:37:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 22 08:37:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:20 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.zogxki on compute-2
Jan 22 08:37:20 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.zogxki on compute-2
Jan 22 08:37:20 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 22 08:37:20 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 22 08:37:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 1 active+clean+scrubbing, 52 peering, 2 active+clean+laggy, 250 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 48 op/s; 0 B/s, 0 objects/s recovering
Jan 22 08:37:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 22 08:37:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 08:37:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 22 08:37:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 1 active+clean+scrubbing, 52 peering, 2 active+clean+laggy, 250 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:37:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:23.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:24 np0005592157 ceph-mon[74359]: Deploying daemon haproxy.rgw.default.compute-2.zogxki on compute-2
Jan 22 08:37:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event 235b4342-7f85-49a8-952f-30f8fd99586c (Global Recovery Event) in 5 seconds
Jan 22 08:37:25 np0005592157 systemd-logind[785]: New session 34 of user zuul.
Jan 22 08:37:25 np0005592157 systemd[1]: Started Session 34 of User zuul.
Jan 22 08:37:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 161 B/s, 0 objects/s recovering
Jan 22 08:37:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:25.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:26 np0005592157 python3.9[93428]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:37:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check failed: 2 slow ops, oldest one blocked for 36 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 161 B/s, 0 objects/s recovering
Jan 22 08:37:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:27 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 22 08:37:27 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 22 08:37:28 np0005592157 python3.9[93642]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:37:28 np0005592157 ceph-mon[74359]: Health check failed: 2 slow ops, oldest one blocked for 36 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:28 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 22 08:37:28 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 22 08:37:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:37:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:29.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:37:29 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 22 completed events
Jan 22 08:37:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 129 B/s, 0 objects/s recovering
Jan 22 08:37:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:37:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:37:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] Starting Global Recovery Event,29 pgs not in active + clean state
Jan 22 08:37:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 268 B/s, 0 objects/s recovering
Jan 22 08:37:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 22 08:37:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:31.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:31.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 22 08:37:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:37:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 41 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 252 B/s, 0 objects/s recovering
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:33.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:33.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 41 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 22 08:37:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:33 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:33 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:33 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:33 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.hawera on compute-0
Jan 22 08:37:33 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.hawera on compute-0
Jan 22 08:37:33 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 22 08:37:33 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 22 08:37:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 22 08:37:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 22 08:37:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.185004234s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 193.641479492s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.189726830s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'698 lcod 62'697 mlcod 62'697 active pruub 193.646301270s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.184815407s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 193.641479492s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.189580917s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'698 lcod 62'697 mlcod 0'0 unknown NOTIFY pruub 193.646301270s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.189283371s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'704 lcod 62'703 mlcod 62'703 active pruub 193.646636963s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.189174652s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'704 lcod 62'703 mlcod 0'0 unknown NOTIFY pruub 193.646636963s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.189120293s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 193.646667480s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.189059258s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 193.646667480s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=59/60 n=3 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187717438s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=61'686 lcod 61'685 mlcod 61'685 active pruub 193.647003174s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=59/60 n=3 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187636375s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=61'686 lcod 61'685 mlcod 0'0 unknown NOTIFY pruub 193.647003174s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187912941s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=61'690 lcod 61'689 mlcod 61'689 active pruub 193.647384644s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187848091s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=61'690 lcod 61'689 mlcod 0'0 unknown NOTIFY pruub 193.647384644s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187411308s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 193.647003174s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187338829s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 193.647003174s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 65 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187358856s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 193.647171021s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:34 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 66 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65 pruub=10.187297821s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 193.647171021s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:35 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event afcc79ac-2d4c-46a7-af22-255f9e5f2641 (Global Recovery Event) in 5 seconds
Jan 22 08:37:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 174 B/s, 0 objects/s recovering
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 08:37:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:35.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: Deploying daemon keepalived.rgw.default.compute-0.hawera on compute-0
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 22 08:37:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 22 08:37:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 08:37:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 08:37:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:36 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 22 08:37:36 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 22 08:37:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 22 08:37:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'698 lcod 62'697 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'698 lcod 62'697 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'704 lcod 62'703 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'704 lcod 62'703 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=61'690 lcod 61'689 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=61'690 lcod 61'689 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=59/60 n=3 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=61'686 lcod 61'685 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=59/60 n=3 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=61'686 lcod 61'685 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 68 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 08:37:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:37.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:37.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 47 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 47 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:37 np0005592157 podman[93812]: 2026-01-22 13:37:37.79459102 +0000 UTC m=+3.391618204 container create df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb (image=quay.io/ceph/keepalived:2.2.4, name=crazy_herschel, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, release=1793, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc.)
Jan 22 08:37:37 np0005592157 podman[93812]: 2026-01-22 13:37:37.775209873 +0000 UTC m=+3.372237087 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 08:37:37 np0005592157 systemd[1]: Started libpod-conmon-df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb.scope.
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 22 08:37:37 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 22 08:37:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:37:37 np0005592157 podman[93812]: 2026-01-22 13:37:37.887986493 +0000 UTC m=+3.485013767 container init df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb (image=quay.io/ceph/keepalived:2.2.4, name=crazy_herschel, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=)
Jan 22 08:37:37 np0005592157 podman[93812]: 2026-01-22 13:37:37.898227126 +0000 UTC m=+3.495254320 container start df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb (image=quay.io/ceph/keepalived:2.2.4, name=crazy_herschel, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, architecture=x86_64, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, version=2.2.4, vcs-type=git)
Jan 22 08:37:37 np0005592157 crazy_herschel[93933]: 0 0
Jan 22 08:37:37 np0005592157 systemd[1]: libpod-df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb.scope: Deactivated successfully.
Jan 22 08:37:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 22 08:37:38 np0005592157 podman[93812]: 2026-01-22 13:37:38.195236069 +0000 UTC m=+3.792263263 container attach df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb (image=quay.io/ceph/keepalived:2.2.4, name=crazy_herschel, distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container)
Jan 22 08:37:38 np0005592157 podman[93812]: 2026-01-22 13:37:38.196693124 +0000 UTC m=+3.793720328 container died df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb (image=quay.io/ceph/keepalived:2.2.4, name=crazy_herschel, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, distribution-scope=public, release=1793, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 08:37:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 08:37:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 22 08:37:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.931156158s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 201.646575928s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.931098938s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 201.646575928s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.930762291s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'705 lcod 62'704 mlcod 62'704 active pruub 201.646865845s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.930501938s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'705 lcod 62'704 mlcod 0'0 unknown NOTIFY pruub 201.646865845s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.930733681s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 201.647216797s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.930524826s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 201.647216797s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.930337906s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 201.647323608s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=14.930164337s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 201.647323608s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=68/69 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=62'698 lcod 62'697 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=68/69 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=62'695 lcod 62'694 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-167e459f6debed614b8b8367a41f48292ea695370e2079f32d2e65f7eb286e46-merged.mount: Deactivated successfully.
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=68/69 n=3 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=61'686 lcod 61'685 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=61'690 lcod 61'689 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=68/69 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=62'704 lcod 62'703 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 69 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[59,68)/1 crt=62'690 lcod 62'689 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:38 np0005592157 podman[93812]: 2026-01-22 13:37:38.256479389 +0000 UTC m=+3.853506563 container remove df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb (image=quay.io/ceph/keepalived:2.2.4, name=crazy_herschel, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, name=keepalived, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.expose-services=)
Jan 22 08:37:38 np0005592157 systemd[1]: libpod-conmon-df37e3a7585f0612b1a2d930d84ffa8405925cf650f122ad235f96c0359634fb.scope: Deactivated successfully.
Jan 22 08:37:38 np0005592157 systemd[1]: Reloading.
Jan 22 08:37:38 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:38 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:38 np0005592157 systemd[1]: Reloading.
Jan 22 08:37:38 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:38 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:38 np0005592157 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.hawera for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:37:39 np0005592157 systemd[1]: session-34.scope: Deactivated successfully.
Jan 22 08:37:39 np0005592157 systemd[1]: session-34.scope: Consumed 8.329s CPU time.
Jan 22 08:37:39 np0005592157 systemd-logind[785]: Session 34 logged out. Waiting for processes to exit.
Jan 22 08:37:39 np0005592157 systemd-logind[785]: Removed session 34.
Jan 22 08:37:39 np0005592157 podman[94076]: 2026-01-22 13:37:39.148669897 +0000 UTC m=+0.045794290 container create f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, release=1793, distribution-scope=public)
Jan 22 08:37:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c439818002c5c492b09cc9a5a9fda498a4de6d61c0d8d16cd6cbba84a543faa9/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:37:39 np0005592157 podman[94076]: 2026-01-22 13:37:39.218356185 +0000 UTC m=+0.115480588 container init f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, release=1793, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vcs-type=git)
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 08:37:39 np0005592157 podman[94076]: 2026-01-22 13:37:39.124618124 +0000 UTC m=+0.021742557 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 08:37:39 np0005592157 podman[94076]: 2026-01-22 13:37:39.223497842 +0000 UTC m=+0.120622235 container start f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, version=2.2.4, release=1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 22 08:37:39 np0005592157 bash[94076]: f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 22 08:37:39 np0005592157 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.hawera for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.986012459s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 202.728668213s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.985899925s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 202.728668213s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=68/69 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.985424042s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=62'698 lcod 62'697 mlcod 62'697 active pruub 202.728668213s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=68/69 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.985045433s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=62'698 lcod 62'697 mlcod 0'0 unknown NOTIFY pruub 202.728668213s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=68/69 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.992745399s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=62'704 lcod 62'703 mlcod 62'703 active pruub 202.736587524s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=68/69 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.992661476s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=62'704 lcod 62'703 mlcod 0'0 unknown NOTIFY pruub 202.736587524s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.992294312s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 202.736404419s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.992202759s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 202.736404419s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'705 lcod 62'704 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'705 lcod 62'704 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.991893768s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=61'690 lcod 61'689 mlcod 61'689 active pruub 202.736526489s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.991836548s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=61'690 lcod 61'689 mlcod 0'0 unknown NOTIFY pruub 202.736526489s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=68/69 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.991575241s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=61'686 lcod 61'685 mlcod 61'685 active pruub 202.736419678s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=68/69 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.991539001s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=61'686 lcod 61'685 mlcod 0'0 unknown NOTIFY pruub 202.736419678s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.991642952s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 202.736877441s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.991581917s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 202.736877441s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.990746498s) [2] async=[2] r=-1 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 202.736618042s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 70 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=68/69 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70 pruub=14.990629196s) [2] r=-1 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 202.736618042s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: Starting VRRP child process, pid=4
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: Startup complete
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: (VI_0) Entering BACKUP STATE (init)
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:37:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:39 2026: VRRP_Script(check_backend) succeeded
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 22 08:37:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:39 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:39 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:39 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:39 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:39 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.xbsrtt on compute-2
Jan 22 08:37:39 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.xbsrtt on compute-2
Jan 22 08:37:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:37:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:39.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:39.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: Deploying daemon keepalived.rgw.default.compute-2.xbsrtt on compute-2
Jan 22 08:37:40 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 71 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 lcod 62'689 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 71 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 71 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=70/71 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[59,70)/1 crt=62'705 lcod 62'704 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 71 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=70/71 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[59,70)/1 crt=62'695 lcod 62'694 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 23 completed events
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:37:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Jan 22 08:37:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:41 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:37:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:37:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 22 08:37:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:41.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 22 08:37:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.785765648s) [2] async=[2] r=-1 lpr=72 pi=[59,72)/1 crt=62'690 lcod 62'689 mlcod 62'689 active pruub 204.768493652s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.785663605s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=62'690 lcod 62'689 mlcod 0'0 unknown NOTIFY pruub 204.768493652s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.785571098s) [2] async=[2] r=-1 lpr=72 pi=[59,72)/1 crt=62'705 lcod 62'704 mlcod 62'704 active pruub 204.768615723s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.785504341s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=62'705 lcod 62'704 mlcod 0'0 unknown NOTIFY pruub 204.768615723s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=70/71 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.785366058s) [2] async=[2] r=-1 lpr=72 pi=[59,72)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 204.768707275s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=70/71 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.785315514s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 204.768707275s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.785025597s) [2] async=[2] r=-1 lpr=72 pi=[59,72)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 204.768585205s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 72 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72 pruub=14.784979820s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 204.768585205s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:41.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Jan 22 08:37:41 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Jan 22 08:37:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 22 08:37:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 22 08:37:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 22 08:37:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 52 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:42 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 22 08:37:42 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 22 08:37:42 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:42 2026: (VI_0) Entering MASTER STATE
Jan 22 08:37:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:37:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:37:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:43.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:37:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:43 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 22 08:37:43 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 22 08:37:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:37:44 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Jan 22 08:37:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 682 B/s wr, 52 op/s; 300 B/s, 10 objects/s recovering
Jan 22 08:37:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 22 08:37:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 08:37:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:45.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:45 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event f88be802-14b8-46b3-92f7-6ecefbb5eb19 (Global Recovery Event) in 5 seconds
Jan 22 08:37:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 22 08:37:45 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 22 08:37:45 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 52 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.984668732s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 209.646881104s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.984179497s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 209.646881104s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.983693123s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=61'698 lcod 61'697 mlcod 61'697 active pruub 209.647033691s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.983633041s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=62'697 lcod 62'696 mlcod 62'696 active pruub 209.647216797s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.983435631s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=61'698 lcod 61'697 mlcod 0'0 unknown NOTIFY pruub 209.647033691s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.983483315s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=62'697 lcod 62'696 mlcod 0'0 unknown NOTIFY pruub 209.647216797s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.983396530s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 209.647491455s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 74 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=14.983280182s) [1] r=-1 lpr=74 pi=[59,74)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.647491455s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [progress INFO root] complete: finished ev cdf6db82-10d0-47b4-b3e0-a2f70daebac5 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [progress INFO root] Completed event cdf6db82-10d0-47b4-b3e0-a2f70daebac5 (Updating ingress.rgw.default deployment (+4 -> 4)) in 45 seconds
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:37:46
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'backups']
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 2/10 changes
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] Executing plan auto_2026-01-22_13:37:46
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] ceph osd pg-upmap-items 9.1 mappings [{'from': 0, 'to': 1}]
Jan 22 08:37:46 np0005592157 ceph-mgr[74655]: [balancer INFO root] ceph osd pg-upmap-items 9.12 mappings [{'from': 0, 'to': 1}]
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]} v 0) v1
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]: dispatch
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]} v 0) v1
Jan 22 08:37:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]: dispatch
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]: dispatch
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]: dispatch
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]': finished
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]': finished
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e75 crush map has features 3314933000854323200, adjusting msgr requires
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 75 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 75 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 75 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=62'697 lcod 62'696 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=62'697 lcod 62'696 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=61'698 lcod 61'697 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=61'698 lcod 61'697 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=13.952741623s) [1] r=-1 lpr=75 pi=[59,75)/1 crt=62'703 lcod 62'702 mlcod 62'702 active pruub 209.647277832s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=0 lpr=75 pi=[59,75)/1 crt=62'695 lcod 62'694 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=13.952643394s) [1] r=-1 lpr=75 pi=[59,75)/1 crt=62'703 lcod 62'702 mlcod 0'0 unknown NOTIFY pruub 209.647277832s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=13.951997757s) [1] r=-1 lpr=75 pi=[59,75)/1 crt=61'698 lcod 61'697 mlcod 61'697 active pruub 209.647232056s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 75 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=13.951931953s) [1] r=-1 lpr=75 pi=[59,75)/1 crt=61'698 lcod 61'697 mlcod 0'0 unknown NOTIFY pruub 209.647232056s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 685 B/s wr, 53 op/s; 301 B/s, 10 objects/s recovering
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 08:37:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:47.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:47 np0005592157 podman[94379]: 2026-01-22 13:37:47.573188662 +0000 UTC m=+0.084619007 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 57 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 22 08:37:47 np0005592157 podman[94379]: 2026-01-22 13:37:47.696269647 +0000 UTC m=+0.207699962 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=0 lpr=76 pi=[59,76)/1 crt=62'703 lcod 62'702 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=0 lpr=76 pi=[59,76)/1 crt=62'703 lcod 62'702 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=0 lpr=76 pi=[59,76)/1 crt=61'698 lcod 61'697 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=0 lpr=76 pi=[59,76)/1 crt=61'698 lcod 61'697 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=75/76 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[59,75)/1 crt=62'697 lcod 62'696 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=75/76 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[59,75)/1 crt=62'695 lcod 62'694 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=75/76 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[59,75)/1 crt=58'684 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 76 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=75/76 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[59,75)/1 crt=61'698 lcod 61'697 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:47 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera[94091]: Thu Jan 22 13:37:47 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 22 08:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 22 08:37:48 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]': finished
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]': finished
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 57 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592157 podman[94527]: 2026-01-22 13:37:48.47752505 +0000 UTC m=+0.281898662 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:37:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 22 08:37:48 np0005592157 podman[94527]: 2026-01-22 13:37:48.902549568 +0000 UTC m=+0.706923160 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:37:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 682 B/s wr, 52 op/s; 300 B/s, 10 objects/s recovering
Jan 22 08:37:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 22 08:37:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:49.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:49.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.441671371s) [1] async=[1] r=-1 lpr=77 pi=[59,77)/1 crt=62'697 lcod 62'696 mlcod 62'696 active pruub 212.338333130s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.441578865s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=62'697 lcod 62'696 mlcod 0'0 unknown NOTIFY pruub 212.338333130s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.444233894s) [1] async=[1] r=-1 lpr=77 pi=[59,77)/1 crt=61'698 lcod 61'697 mlcod 61'697 active pruub 212.341308594s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.444014549s) [1] async=[1] r=-1 lpr=77 pi=[59,77)/1 crt=62'695 lcod 62'694 mlcod 62'694 active pruub 212.341293335s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.443901062s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=62'695 lcod 62'694 mlcod 0'0 unknown NOTIFY pruub 212.341293335s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=75/76 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.443789482s) [1] async=[1] r=-1 lpr=77 pi=[59,77)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 212.341308594s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=75/76 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.443681717s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 212.341308594s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77 pruub=13.443235397s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=61'698 lcod 61'697 mlcod 0'0 unknown NOTIFY pruub 212.341308594s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=76/77 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] async=[1] r=0 lpr=76 pi=[59,76)/1 crt=61'698 lcod 61'697 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 77 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=76/77 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] async=[1] r=0 lpr=76 pi=[59,76)/1 crt=62'703 lcod 62'702 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:37:50 np0005592157 ceph-mgr[74655]: [progress INFO root] Writing back 25 completed events
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 22 08:37:50 np0005592157 podman[94594]: 2026-01-22 13:37:50.631141909 +0000 UTC m=+0.067670959 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, release=1793, com.redhat.component=keepalived-container)
Jan 22 08:37:50 np0005592157 podman[94594]: 2026-01-22 13:37:50.653265005 +0000 UTC m=+0.089794005 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 22 08:37:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 2 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 20 B/s, 2 objects/s recovering
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.934318542s) [1] async=[1] r=-1 lpr=78 pi=[59,78)/1 crt=61'698 lcod 61'697 mlcod 61'697 active pruub 214.903106689s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.934211731s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=61'698 lcod 61'697 mlcod 0'0 unknown NOTIFY pruub 214.903106689s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.978249550s) [1] async=[1] r=-1 lpr=78 pi=[59,78)/1 crt=62'703 lcod 62'702 mlcod 62'702 active pruub 214.947647095s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.677162170s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=61'693 lcod 61'692 mlcod 61'692 active pruub 209.647201538s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.677090645s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=61'693 lcod 61'692 mlcod 0'0 unknown NOTIFY pruub 209.647201538s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.977021217s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=62'703 lcod 62'702 mlcod 0'0 unknown NOTIFY pruub 214.947647095s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.676262856s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 209.647277832s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 78 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.676194191s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.647277832s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:51.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:37:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:51.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 22 08:37:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 62 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 2 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 08:37:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 22 08:37:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 08:37:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:53.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:55 np0005592157 systemd-logind[785]: New session 35 of user zuul.
Jan 22 08:37:55 np0005592157 systemd[1]: Started Session 35 of User zuul.
Jan 22 08:37:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 2 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 43 B/s, 4 objects/s recovering
Jan 22 08:37:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:55.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 22 08:37:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 22 08:37:56 np0005592157 python3.9[94913]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 08:37:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:56 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 62 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 22 08:37:57 np0005592157 python3.9[95087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:37:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 2 unknown, 2 active+clean+scrubbing, 2 peering, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 4 objects/s recovering
Jan 22 08:37:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:57.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 08:37:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 22 08:37:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.628947258s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 217.647018433s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.628856659s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.647018433s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=0 lpr=80 pi=[59,80)/1 crt=61'693 lcod 61'692 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=0 lpr=80 pi=[59,80)/1 crt=61'693 lcod 61'692 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=0 lpr=80 pi=[59,80)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=0 lpr=80 pi=[59,80)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.626944542s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=62'705 lcod 62'704 mlcod 62'704 active pruub 217.647781372s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:57 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 80 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.626787186s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=62'705 lcod 62'704 mlcod 0'0 unknown NOTIFY pruub 217.647781372s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 08:37:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 22 08:37:58 np0005592157 python3.9[95244]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:37:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 67 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 2 unknown, 2 active+clean+scrubbing, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 2 objects/s recovering
Jan 22 08:37:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:37:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:59.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:59 np0005592157 python3.9[95398]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:38:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 22 08:38:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 22 08:38:01 np0005592157 python3.9[95552]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:38:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 08:38:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:01.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 22 08:38:01 np0005592157 python3.9[95705]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:38:01 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 22 08:38:02 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 22 08:38:02 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Jan 22 08:38:02 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:38:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 08:38:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:03.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:03.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:03 np0005592157 python3.9[95857]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:38:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 08:38:03 np0005592157 network[95874]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:38:03 np0005592157 network[95875]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:38:03 np0005592157 network[95876]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:38:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 81 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=80/81 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[59,80)/1 crt=61'693 lcod 61'692 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:03 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 81 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=80/81 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[59,80)/1 crt=58'684 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 22 08:38:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 22 08:38:04 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 82 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:04 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 82 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:04 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 82 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=62'705 lcod 62'704 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:04 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 82 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=62'705 lcod 62'704 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 74 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 22 08:38:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 08:38:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:05.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 67 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 22 08:38:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 22 08:38:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 active+recovering+remapped, 2 unknown, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 9/205 objects misplaced (4.390%); 0 B/s, 0 objects/s recovering
Jan 22 08:38:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:07.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:07.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 74 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 83 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=82/83 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[59,82)/1 crt=62'705 lcod 62'704 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:08 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 83 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=82/83 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[59,82)/1 crt=58'684 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 22 08:38:08 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 84 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=80/81 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84 pruub=11.200597763s) [2] async=[2] r=-1 lpr=84 pi=[59,84)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 228.281372070s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:08 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 84 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=80/81 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84 pruub=11.200368881s) [2] r=-1 lpr=84 pi=[59,84)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.281372070s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:09 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:38:09 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:38:09 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:38:09 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:38:09 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:38:09 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:38:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 1 active+recovering+remapped, 2 unknown, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 9/205 objects misplaced (4.390%); 0 B/s, 0 objects/s recovering
Jan 22 08:38:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:09.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:09.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:09 np0005592157 python3.9[96287]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:38:10 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:10 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:10 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:10 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:10 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:10 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:10 np0005592157 python3.9[96729]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:38:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 142 B/s wr, 20 op/s; 9/215 objects misplaced (4.186%); 30 B/s, 1 objects/s recovering
Jan 22 08:38:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:11.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:38:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:38:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 22 08:38:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 22 08:38:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 85 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=80/81 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85 pruub=8.100573540s) [2] async=[2] r=-1 lpr=85 pi=[59,85)/1 crt=61'693 lcod 61'692 mlcod 61'692 active pruub 228.281234741s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:11 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 85 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=80/81 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85 pruub=8.099834442s) [2] r=-1 lpr=85 pi=[59,85)/1 crt=61'693 lcod 61'692 mlcod 0'0 unknown NOTIFY pruub 228.281234741s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:12 np0005592157 python3.9[97340]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:38:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 79 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:38:13 np0005592157 python3.9[97498]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:38:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 307 B/s wr, 22 op/s; 9/215 objects misplaced (4.186%); 33 B/s, 1 objects/s recovering
Jan 22 08:38:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:13.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:13.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:38:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 22 08:38:14 np0005592157 python3.9[97583]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:38:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 18 op/s; 9/213 objects misplaced (4.225%); 27 B/s, 0 objects/s recovering
Jan 22 08:38:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:15.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 79 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:16 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 86 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=82/83 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86 pruub=8.167543411s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=62'705 lcod 62'704 mlcod 62'704 active pruub 233.065658569s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 22 08:38:16 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 86 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=82/83 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86 pruub=8.166190147s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=62'705 lcod 62'704 mlcod 0'0 unknown NOTIFY pruub 233.065658569s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:38:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 22 08:38:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 22 08:38:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 22 08:38:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 2 active+clean+laggy, 300 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 18 op/s; 3/213 objects misplaced (1.408%); 27 B/s, 1 objects/s recovering
Jan 22 08:38:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:17.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:17.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 22 08:38:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 87 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=82/83 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87 pruub=13.749476433s) [2] async=[2] r=-1 lpr=87 pi=[59,87)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 241.066726685s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:18 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 87 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=82/83 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87 pruub=13.749351501s) [2] r=-1 lpr=87 pi=[59,87)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.066726685s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 724f128f-a03e-4cd6-a289-e6c187b7c8dd does not exist
Jan 22 08:38:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 29b482d0-0a36-460e-b973-d9b3695ee0f3 does not exist
Jan 22 08:38:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e2cd3a7e-2f5a-4f20-8660-cc53367f90a5 does not exist
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 2 active+clean+laggy, 300 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 3/214 objects misplaced (1.402%); 0 B/s, 0 objects/s recovering
Jan 22 08:38:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:19.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:19 np0005592157 podman[97796]: 2026-01-22 13:38:19.720650743 +0000 UTC m=+0.026487139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:20 np0005592157 podman[97796]: 2026-01-22 13:38:20.72626712 +0000 UTC m=+1.032103506 container create c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:38:20 np0005592157 systemd[75969]: Created slice User Background Tasks Slice.
Jan 22 08:38:20 np0005592157 systemd[75969]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 22 08:38:20 np0005592157 systemd[75969]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 08:38:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 22 08:38:20 np0005592157 systemd[1]: Started libpod-conmon-c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30.scope.
Jan 22 08:38:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:20 np0005592157 podman[97796]: 2026-01-22 13:38:20.849918177 +0000 UTC m=+1.155754563 container init c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:20 np0005592157 podman[97796]: 2026-01-22 13:38:20.857279057 +0000 UTC m=+1.163115423 container start c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:20 np0005592157 podman[97796]: 2026-01-22 13:38:20.862638818 +0000 UTC m=+1.168475204 container attach c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:20 np0005592157 systemd[1]: libpod-c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30.scope: Deactivated successfully.
Jan 22 08:38:20 np0005592157 quirky_lamport[97813]: 167 167
Jan 22 08:38:20 np0005592157 conmon[97813]: conmon c66a6a6271b8e2cd9a08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30.scope/container/memory.events
Jan 22 08:38:20 np0005592157 podman[97796]: 2026-01-22 13:38:20.864947245 +0000 UTC m=+1.170783621 container died c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 08:38:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bd5f3199fd1abb30da2411b06b5827ead7ee8a982a226be133a5a174d1ae6c10-merged.mount: Deactivated successfully.
Jan 22 08:38:21 np0005592157 podman[97796]: 2026-01-22 13:38:21.022779009 +0000 UTC m=+1.328615425 container remove c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:21 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 22 08:38:21 np0005592157 systemd[1]: libpod-conmon-c66a6a6271b8e2cd9a080cb04c4a91ca74bbf0f40b9ea25f415c17ce42ee9e30.scope: Deactivated successfully.
Jan 22 08:38:21 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 22 08:38:21 np0005592157 podman[97837]: 2026-01-22 13:38:21.225972463 +0000 UTC m=+0.046472669 container create 4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:38:21 np0005592157 systemd[1]: Started libpod-conmon-4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63.scope.
Jan 22 08:38:21 np0005592157 podman[97837]: 2026-01-22 13:38:21.203292818 +0000 UTC m=+0.023793024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc32a704910fc38e6b5e90a8de1d2e06d18136a080161e906efec2240fbdcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc32a704910fc38e6b5e90a8de1d2e06d18136a080161e906efec2240fbdcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc32a704910fc38e6b5e90a8de1d2e06d18136a080161e906efec2240fbdcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc32a704910fc38e6b5e90a8de1d2e06d18136a080161e906efec2240fbdcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc32a704910fc38e6b5e90a8de1d2e06d18136a080161e906efec2240fbdcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:21 np0005592157 podman[97837]: 2026-01-22 13:38:21.330568524 +0000 UTC m=+0.151068750 container init 4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:38:21 np0005592157 podman[97837]: 2026-01-22 13:38:21.336276713 +0000 UTC m=+0.156776919 container start 4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_aryabhata, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:21 np0005592157 podman[97837]: 2026-01-22 13:38:21.365917149 +0000 UTC m=+0.186417355 container attach 4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:38:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 08:38:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:21.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:21.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 22 08:38:21 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 89 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89 pruub=11.245271683s) [1] r=-1 lpr=89 pi=[59,89)/1 crt=61'690 lcod 61'689 mlcod 61'689 active pruub 241.650329590s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:21 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 89 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89 pruub=11.245184898s) [1] r=-1 lpr=89 pi=[59,89)/1 crt=61'690 lcod 61'689 mlcod 0'0 unknown NOTIFY pruub 241.650329590s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 22 08:38:21 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 89 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89 pruub=11.244165421s) [1] r=-1 lpr=89 pi=[59,89)/1 crt=62'714 lcod 62'713 mlcod 62'713 active pruub 241.650558472s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:21 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 89 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89 pruub=11.244070053s) [1] r=-1 lpr=89 pi=[59,89)/1 crt=62'714 lcod 62'713 mlcod 0'0 unknown NOTIFY pruub 241.650558472s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:22 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Jan 22 08:38:22 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Jan 22 08:38:22 np0005592157 loving_aryabhata[97853]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:38:22 np0005592157 loving_aryabhata[97853]: --> relative data size: 1.0
Jan 22 08:38:22 np0005592157 loving_aryabhata[97853]: --> All data devices are unavailable
Jan 22 08:38:22 np0005592157 systemd[1]: libpod-4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63.scope: Deactivated successfully.
Jan 22 08:38:22 np0005592157 podman[97837]: 2026-01-22 13:38:22.183574755 +0000 UTC m=+1.004074961 container died 4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 08:38:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-11cc32a704910fc38e6b5e90a8de1d2e06d18136a080161e906efec2240fbdcc-merged.mount: Deactivated successfully.
Jan 22 08:38:22 np0005592157 podman[97837]: 2026-01-22 13:38:22.253648731 +0000 UTC m=+1.074148937 container remove 4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:22 np0005592157 systemd[1]: libpod-conmon-4481a8831934c2d3329faaef92b4adf41c3c53d97b64127de6fe132e1bf5fa63.scope: Deactivated successfully.
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 22 08:38:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 22 08:38:22 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 90 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=0 lpr=90 pi=[59,90)/1 crt=61'690 lcod 61'689 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:22 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 90 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=0 lpr=90 pi=[59,90)/1 crt=62'714 lcod 62'713 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:22 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 90 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=0 lpr=90 pi=[59,90)/1 crt=62'714 lcod 62'713 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:22 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 90 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=0 lpr=90 pi=[59,90)/1 crt=61'690 lcod 61'689 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:22 np0005592157 podman[98019]: 2026-01-22 13:38:22.887850956 +0000 UTC m=+0.035176262 container create dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:22 np0005592157 systemd[1]: Started libpod-conmon-dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e.scope.
Jan 22 08:38:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:22 np0005592157 podman[98019]: 2026-01-22 13:38:22.968542641 +0000 UTC m=+0.115867977 container init dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 08:38:22 np0005592157 podman[98019]: 2026-01-22 13:38:22.873248698 +0000 UTC m=+0.020574024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:22 np0005592157 podman[98019]: 2026-01-22 13:38:22.976512736 +0000 UTC m=+0.123838042 container start dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:38:22 np0005592157 podman[98019]: 2026-01-22 13:38:22.980587376 +0000 UTC m=+0.127912682 container attach dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:22 np0005592157 wonderful_lalande[98035]: 167 167
Jan 22 08:38:22 np0005592157 systemd[1]: libpod-dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e.scope: Deactivated successfully.
Jan 22 08:38:22 np0005592157 podman[98019]: 2026-01-22 13:38:22.983248921 +0000 UTC m=+0.130574257 container died dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:38:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fdcf9fbceb117ec6ac385143f1c3864dcaa61a6f56d7069a827b07dbe4596ad5-merged.mount: Deactivated successfully.
Jan 22 08:38:23 np0005592157 podman[98019]: 2026-01-22 13:38:23.026331856 +0000 UTC m=+0.173657182 container remove dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:38:23 np0005592157 systemd[1]: libpod-conmon-dcbd339bbfced4aa2156da2bc18aa05b5d8fb5771069ecc2ad4b0a7e0fb9863e.scope: Deactivated successfully.
Jan 22 08:38:23 np0005592157 podman[98060]: 2026-01-22 13:38:23.202992611 +0000 UTC m=+0.055220453 container create bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:23 np0005592157 systemd[1]: Started libpod-conmon-bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92.scope.
Jan 22 08:38:23 np0005592157 podman[98060]: 2026-01-22 13:38:23.174192155 +0000 UTC m=+0.026420077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7645ed430178abf299881e62e8100f0af2538b5327c88481830de1382702fcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7645ed430178abf299881e62e8100f0af2538b5327c88481830de1382702fcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7645ed430178abf299881e62e8100f0af2538b5327c88481830de1382702fcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7645ed430178abf299881e62e8100f0af2538b5327c88481830de1382702fcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:23 np0005592157 podman[98060]: 2026-01-22 13:38:23.302526167 +0000 UTC m=+0.154754049 container init bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:38:23 np0005592157 podman[98060]: 2026-01-22 13:38:23.310610365 +0000 UTC m=+0.162838207 container start bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:23 np0005592157 podman[98060]: 2026-01-22 13:38:23.315775631 +0000 UTC m=+0.168003523 container attach bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 68 B/s, 2 objects/s recovering
Jan 22 08:38:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 22 08:38:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 08:38:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:23.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:23.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]: {
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:    "0": [
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:        {
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "devices": [
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "/dev/loop3"
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            ],
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "lv_name": "ceph_lv0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "lv_size": "7511998464",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "name": "ceph_lv0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "tags": {
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.cluster_name": "ceph",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.crush_device_class": "",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.encrypted": "0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.osd_id": "0",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.type": "block",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:                "ceph.vdo": "0"
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            },
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "type": "block",
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:            "vg_name": "ceph_vg0"
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:        }
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]:    ]
Jan 22 08:38:23 np0005592157 laughing_noyce[98077]: }
Jan 22 08:38:24 np0005592157 systemd[1]: libpod-bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92.scope: Deactivated successfully.
Jan 22 08:38:24 np0005592157 podman[98060]: 2026-01-22 13:38:24.027875464 +0000 UTC m=+0.880103326 container died bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f7645ed430178abf299881e62e8100f0af2538b5327c88481830de1382702fcd-merged.mount: Deactivated successfully.
Jan 22 08:38:24 np0005592157 podman[98060]: 2026-01-22 13:38:24.093801568 +0000 UTC m=+0.946029420 container remove bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:24 np0005592157 systemd[1]: libpod-conmon-bab36c53b892e61e15a2d25af109ad4d91db5453063bfb363b2eae056cc04c92.scope: Deactivated successfully.
Jan 22 08:38:24 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 08:38:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 08:38:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 22 08:38:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 22 08:38:24 np0005592157 podman[98241]: 2026-01-22 13:38:24.744796813 +0000 UTC m=+0.045061994 container create be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:38:24 np0005592157 systemd[1]: Started libpod-conmon-be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322.scope.
Jan 22 08:38:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:24 np0005592157 podman[98241]: 2026-01-22 13:38:24.724980978 +0000 UTC m=+0.025246209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:24 np0005592157 podman[98241]: 2026-01-22 13:38:24.831024594 +0000 UTC m=+0.131289815 container init be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 08:38:24 np0005592157 podman[98241]: 2026-01-22 13:38:24.837387529 +0000 UTC m=+0.137652710 container start be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:38:24 np0005592157 podman[98241]: 2026-01-22 13:38:24.841352506 +0000 UTC m=+0.141617707 container attach be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elbakyan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:24 np0005592157 stupefied_elbakyan[98257]: 167 167
Jan 22 08:38:24 np0005592157 systemd[1]: libpod-be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322.scope: Deactivated successfully.
Jan 22 08:38:24 np0005592157 podman[98241]: 2026-01-22 13:38:24.842407602 +0000 UTC m=+0.142672783 container died be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elbakyan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b8c15772a4585f3044e43a56f1bb7fd2043746459275afdcb5539c2cf544635d-merged.mount: Deactivated successfully.
Jan 22 08:38:24 np0005592157 podman[98241]: 2026-01-22 13:38:24.881781696 +0000 UTC m=+0.182046907 container remove be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elbakyan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:24 np0005592157 systemd[1]: libpod-conmon-be2019515a377cf34b2996bac365636e6b1b15dc51dd8f9432c6592424866322.scope: Deactivated successfully.
Jan 22 08:38:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 91 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=90/91 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[59,90)/1 crt=61'690 lcod 61'689 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 91 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=90/91 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[59,90)/1 crt=62'714 lcod 62'713 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:25 np0005592157 podman[98281]: 2026-01-22 13:38:25.041949957 +0000 UTC m=+0.037740375 container create d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:38:25 np0005592157 systemd[1]: Started libpod-conmon-d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5.scope.
Jan 22 08:38:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18d2fa62b9c7ab0ef3512a1ef1cbcdfcad25449bb49d1c1e99c428cec7a0407/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18d2fa62b9c7ab0ef3512a1ef1cbcdfcad25449bb49d1c1e99c428cec7a0407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18d2fa62b9c7ab0ef3512a1ef1cbcdfcad25449bb49d1c1e99c428cec7a0407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18d2fa62b9c7ab0ef3512a1ef1cbcdfcad25449bb49d1c1e99c428cec7a0407/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:25 np0005592157 podman[98281]: 2026-01-22 13:38:25.026492039 +0000 UTC m=+0.022282457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:25 np0005592157 podman[98281]: 2026-01-22 13:38:25.133601421 +0000 UTC m=+0.129391879 container init d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:38:25 np0005592157 podman[98281]: 2026-01-22 13:38:25.138847269 +0000 UTC m=+0.134637707 container start d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_colden, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:25 np0005592157 podman[98281]: 2026-01-22 13:38:25.141770451 +0000 UTC m=+0.137560889 container attach d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 08:38:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 22 08:38:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 22 08:38:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 22 08:38:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 92 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=90/91 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92 pruub=15.741130829s) [1] async=[1] r=-1 lpr=92 pi=[59,92)/1 crt=61'690 lcod 61'689 mlcod 61'689 active pruub 249.498977661s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 92 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=90/91 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92 pruub=15.740345955s) [1] r=-1 lpr=92 pi=[59,92)/1 crt=61'690 lcod 61'689 mlcod 0'0 unknown NOTIFY pruub 249.498977661s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 92 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=90/91 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92 pruub=15.740204811s) [1] async=[1] r=-1 lpr=92 pi=[59,92)/1 crt=62'714 lcod 62'713 mlcod 62'713 active pruub 249.499023438s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 92 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=90/91 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92 pruub=15.739447594s) [1] r=-1 lpr=92 pi=[59,92)/1 crt=62'714 lcod 62'713 mlcod 0'0 unknown NOTIFY pruub 249.499023438s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:38:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 22 08:38:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 08:38:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:25.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:25.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:26 np0005592157 exciting_colden[98297]: {
Jan 22 08:38:26 np0005592157 exciting_colden[98297]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:38:26 np0005592157 exciting_colden[98297]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:38:26 np0005592157 exciting_colden[98297]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:38:26 np0005592157 exciting_colden[98297]:        "osd_id": 0,
Jan 22 08:38:26 np0005592157 exciting_colden[98297]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:38:26 np0005592157 exciting_colden[98297]:        "type": "bluestore"
Jan 22 08:38:26 np0005592157 exciting_colden[98297]:    }
Jan 22 08:38:26 np0005592157 exciting_colden[98297]: }
Jan 22 08:38:26 np0005592157 systemd[1]: libpod-d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5.scope: Deactivated successfully.
Jan 22 08:38:26 np0005592157 podman[98281]: 2026-01-22 13:38:26.068776974 +0000 UTC m=+1.064567392 container died d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 08:38:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f18d2fa62b9c7ab0ef3512a1ef1cbcdfcad25449bb49d1c1e99c428cec7a0407-merged.mount: Deactivated successfully.
Jan 22 08:38:26 np0005592157 podman[98281]: 2026-01-22 13:38:26.128120337 +0000 UTC m=+1.123910755 container remove d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:38:26 np0005592157 systemd[1]: libpod-conmon-d187fe623b2079ddc4501b519c3b1e89f0b3fa03495dcf9799adcafce7e4b6f5.scope: Deactivated successfully.
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b7738b3f-c773-49b8-9c05-2ae7bb2685f9 does not exist
Jan 22 08:38:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3d89b454-16d8-477e-9590-d21854ccc7a8 does not exist
Jan 22 08:38:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bb845383-191a-4e70-930f-906a08f00d67 does not exist
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 08:38:26 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:26 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:38:26 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:38:27 np0005592157 podman[98549]: 2026-01-22 13:38:27.200324494 +0000 UTC m=+0.056624527 container create 3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:38:27 np0005592157 systemd[1]: Started libpod-conmon-3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb.scope.
Jan 22 08:38:27 np0005592157 podman[98549]: 2026-01-22 13:38:27.175555808 +0000 UTC m=+0.031855821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:27 np0005592157 podman[98549]: 2026-01-22 13:38:27.298736373 +0000 UTC m=+0.155036406 container init 3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_allen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:27 np0005592157 podman[98549]: 2026-01-22 13:38:27.307787455 +0000 UTC m=+0.164087488 container start 3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:27 np0005592157 dreamy_allen[98565]: 167 167
Jan 22 08:38:27 np0005592157 systemd[1]: libpod-3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb.scope: Deactivated successfully.
Jan 22 08:38:27 np0005592157 podman[98549]: 2026-01-22 13:38:27.410075428 +0000 UTC m=+0.266375441 container attach 3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:38:27 np0005592157 conmon[98565]: conmon 3b95d5d9b09139ac3624 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb.scope/container/memory.events
Jan 22 08:38:27 np0005592157 podman[98549]: 2026-01-22 13:38:27.410852427 +0000 UTC m=+0.267152420 container died 3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_allen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fa576fdc6c788b7aa1f9c6ef73049b484a727474878d0a93ec9f761b7f0868e5-merged.mount: Deactivated successfully.
Jan 22 08:38:27 np0005592157 podman[98549]: 2026-01-22 13:38:27.449268538 +0000 UTC m=+0.305568531 container remove 3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:27 np0005592157 systemd[1]: libpod-conmon-3b95d5d9b09139ac36246a6a40029d1e7eae24a647db6b2d7036fbce4a5cdceb.scope: Deactivated successfully.
Jan 22 08:38:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:27 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.nyayzk (monmap changed)...
Jan 22 08:38:27 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.nyayzk (monmap changed)...
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:38:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:27.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:27 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:38:27 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:28 np0005592157 podman[98704]: 2026-01-22 13:38:28.156612123 +0000 UTC m=+0.052357913 container create 1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:38:28 np0005592157 systemd[1]: Started libpod-conmon-1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9.scope.
Jan 22 08:38:28 np0005592157 podman[98704]: 2026-01-22 13:38:28.131845617 +0000 UTC m=+0.027591487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:28 np0005592157 podman[98704]: 2026-01-22 13:38:28.246429362 +0000 UTC m=+0.142175152 container init 1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:38:28 np0005592157 podman[98704]: 2026-01-22 13:38:28.25249695 +0000 UTC m=+0.148242740 container start 1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:28 np0005592157 podman[98704]: 2026-01-22 13:38:28.256384026 +0000 UTC m=+0.152129846 container attach 1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:38:28 np0005592157 stupefied_mcnulty[98720]: 167 167
Jan 22 08:38:28 np0005592157 systemd[1]: libpod-1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9.scope: Deactivated successfully.
Jan 22 08:38:28 np0005592157 podman[98704]: 2026-01-22 13:38:28.259320977 +0000 UTC m=+0.155066767 container died 1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-219f195fdd11364e96204082b3627ebb68a6583c285ea6cdd65876bee49878b5-merged.mount: Deactivated successfully.
Jan 22 08:38:28 np0005592157 podman[98704]: 2026-01-22 13:38:28.294071088 +0000 UTC m=+0.189816878 container remove 1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:28 np0005592157 systemd[1]: libpod-conmon-1a01a9bc010971a1ec8537b9d454af6e0cadb98ea84e460ff6e3ff8200132dd9.scope: Deactivated successfully.
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 08:38:28 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:28 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 08:38:28 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 2 objects/s recovering
Jan 22 08:38:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 22 08:38:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 08:38:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 22 08:38:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:29.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:29 np0005592157 podman[98857]: 2026-01-22 13:38:29.77374029 +0000 UTC m=+0.059302472 container create 76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:38:29 np0005592157 systemd[1]: Started libpod-conmon-76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927.scope.
Jan 22 08:38:29 np0005592157 podman[98857]: 2026-01-22 13:38:29.741993353 +0000 UTC m=+0.027555615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:29 np0005592157 podman[98857]: 2026-01-22 13:38:29.855991834 +0000 UTC m=+0.141554016 container init 76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:38:29 np0005592157 podman[98857]: 2026-01-22 13:38:29.862463022 +0000 UTC m=+0.148025204 container start 76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:38:29 np0005592157 quirky_tu[98873]: 167 167
Jan 22 08:38:29 np0005592157 systemd[1]: libpod-76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927.scope: Deactivated successfully.
Jan 22 08:38:29 np0005592157 podman[98857]: 2026-01-22 13:38:29.866547472 +0000 UTC m=+0.152109664 container attach 76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:38:29 np0005592157 podman[98857]: 2026-01-22 13:38:29.868267554 +0000 UTC m=+0.153829736 container died 76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dcc95e1b824b8b3eb34e2d9f18134c608c63961c00f3c6ea914bd3c2b85a022a-merged.mount: Deactivated successfully.
Jan 22 08:38:29 np0005592157 podman[98857]: 2026-01-22 13:38:29.912850116 +0000 UTC m=+0.198412298 container remove 76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:29 np0005592157 systemd[1]: libpod-conmon-76d9e2282dfc98a22e15fbbe0058b29e559bf4a69c9c42e4eeced9ca4d56d927.scope: Deactivated successfully.
Jan 22 08:38:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:30 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 22 08:38:30 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: Reconfiguring mgr.compute-0.nyayzk (monmap changed)...
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:30 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:31 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 22 08:38:31 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:31 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 22 08:38:31 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 22 08:38:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 2 objects/s recovering
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:38:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:31.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:31.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: Reconfiguring osd.0 (monmap changed)...
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: Reconfiguring daemon osd.0 on compute-0
Jan 22 08:38:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:38:32 np0005592157 podman[99027]: 2026-01-22 13:38:32.050274929 +0000 UTC m=+0.022154953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:32 np0005592157 podman[99027]: 2026-01-22 13:38:32.222525636 +0000 UTC m=+0.194405660 container create baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 22 08:38:32 np0005592157 systemd[1]: Started libpod-conmon-baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d.scope.
Jan 22 08:38:32 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 22 08:38:32 np0005592157 podman[99027]: 2026-01-22 13:38:32.563162075 +0000 UTC m=+0.535042119 container init baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:38:32 np0005592157 podman[99027]: 2026-01-22 13:38:32.576898861 +0000 UTC m=+0.548778895 container start baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:38:32 np0005592157 crazy_mahavira[99049]: 167 167
Jan 22 08:38:32 np0005592157 systemd[1]: libpod-baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d.scope: Deactivated successfully.
Jan 22 08:38:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:38:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 22 08:38:32 np0005592157 podman[99027]: 2026-01-22 13:38:32.828257834 +0000 UTC m=+0.800137888 container attach baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:38:32 np0005592157 podman[99027]: 2026-01-22 13:38:32.82890831 +0000 UTC m=+0.800788354 container died baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 08:38:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 22 08:38:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-56590ae607d620625e7f20fae9fd957511c3f22dc180e95030d641c0e06b14a2-merged.mount: Deactivated successfully.
Jan 22 08:38:32 np0005592157 podman[99027]: 2026-01-22 13:38:32.886002128 +0000 UTC m=+0.857882192 container remove baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:38:32 np0005592157 systemd[1]: libpod-conmon-baed0eaf659931cb142c00fe987d6f1aa49d269175d6893f3acf8bbdac6f059d.scope: Deactivated successfully.
Jan 22 08:38:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:38:33 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 22 08:38:33 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:33 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 08:38:33 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:33 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 08:38:33 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 08:38:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 2 unknown, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:38:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:33.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 22 08:38:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:38:34 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 22 08:38:34 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 22 08:38:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 2 unknown, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:38:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:35.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 22 08:38:36 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:36 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 22 08:38:36 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: Reconfiguring osd.1 (monmap changed)...
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: Reconfiguring daemon osd.1 on compute-1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 2 remapped+peering, 2 unknown, 2 active+clean+laggy, 299 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:38:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 08:38:37 np0005592157 ceph-mgr[74655]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 08:38:38 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 22 08:38:38 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 22 08:38:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 2 peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s; 137 B/s, 5 objects/s recovering
Jan 22 08:38:39 np0005592157 podman[99274]: 2026-01-22 13:38:39.553515657 +0000 UTC m=+0.070183259 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:38:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:39.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:39 np0005592157 podman[99274]: 2026-01-22 13:38:39.688309657 +0000 UTC m=+0.204977219 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:38:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:40 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 22 08:38:40 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 22 08:38:40 np0005592157 podman[99433]: 2026-01-22 13:38:40.490011393 +0000 UTC m=+0.076664988 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:38:40 np0005592157 podman[99433]: 2026-01-22 13:38:40.531295603 +0000 UTC m=+0.117949128 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:38:40 np0005592157 podman[99499]: 2026-01-22 13:38:40.881047065 +0000 UTC m=+0.090631059 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, description=keepalived for Ceph, io.buildah.version=1.28.2, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, release=1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc.)
Jan 22 08:38:40 np0005592157 podman[99499]: 2026-01-22 13:38:40.902302416 +0000 UTC m=+0.111886400 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, vendor=Red Hat, Inc., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, version=2.2.4, name=keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 08:38:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:41 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 22 08:38:41 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 22 08:38:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 1 active+clean+scrubbing, 2 peering, 2 active+clean+laggy, 300 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 4 objects/s recovering
Jan 22 08:38:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:41.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:41.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 510e1f2e-56ac-487c-9a4f-c60a64c30f11 does not exist
Jan 22 08:38:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1fa714c6-d388-426e-ba5c-79006f1554fc does not exist
Jan 22 08:38:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ae2831a9-61f0-4731-987d-14ee111a251a does not exist
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:42 np0005592157 podman[99690]: 2026-01-22 13:38:42.713000991 +0000 UTC m=+0.040492982 container create b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 08:38:42 np0005592157 systemd[1]: Started libpod-conmon-b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197.scope.
Jan 22 08:38:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:42 np0005592157 podman[99690]: 2026-01-22 13:38:42.698704931 +0000 UTC m=+0.026196952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:42 np0005592157 podman[99690]: 2026-01-22 13:38:42.798450213 +0000 UTC m=+0.125942234 container init b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:38:42 np0005592157 podman[99690]: 2026-01-22 13:38:42.810158759 +0000 UTC m=+0.137650800 container start b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:38:42 np0005592157 podman[99690]: 2026-01-22 13:38:42.81468461 +0000 UTC m=+0.142176641 container attach b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 08:38:42 np0005592157 crazy_cannon[99706]: 167 167
Jan 22 08:38:42 np0005592157 systemd[1]: libpod-b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197.scope: Deactivated successfully.
Jan 22 08:38:42 np0005592157 podman[99690]: 2026-01-22 13:38:42.818060103 +0000 UTC m=+0.145552104 container died b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:38:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b001e837fff3751f1c055580c2641130e93ef4933ee42b55bfd05aebbc7e57da-merged.mount: Deactivated successfully.
Jan 22 08:38:42 np0005592157 podman[99690]: 2026-01-22 13:38:42.870154338 +0000 UTC m=+0.197646329 container remove b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:42 np0005592157 systemd[1]: libpod-conmon-b2e737f94cff66a0b5395a8d6974e618c6add3b6cebfa74a0378811ae5418197.scope: Deactivated successfully.
Jan 22 08:38:43 np0005592157 podman[99730]: 2026-01-22 13:38:43.1202463 +0000 UTC m=+0.089538393 container create d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ishizaka, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 22 08:38:43 np0005592157 systemd[1]: Started libpod-conmon-d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45.scope.
Jan 22 08:38:43 np0005592157 podman[99730]: 2026-01-22 13:38:43.089057037 +0000 UTC m=+0.058349230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:43 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b18a4f599afe6534215ebc17db166821e4dc14991ceb9dbf530f6bc1ec4e67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b18a4f599afe6534215ebc17db166821e4dc14991ceb9dbf530f6bc1ec4e67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b18a4f599afe6534215ebc17db166821e4dc14991ceb9dbf530f6bc1ec4e67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b18a4f599afe6534215ebc17db166821e4dc14991ceb9dbf530f6bc1ec4e67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b18a4f599afe6534215ebc17db166821e4dc14991ceb9dbf530f6bc1ec4e67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:43 np0005592157 podman[99730]: 2026-01-22 13:38:43.217352017 +0000 UTC m=+0.186644180 container init d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:38:43 np0005592157 podman[99730]: 2026-01-22 13:38:43.231376281 +0000 UTC m=+0.200668404 container start d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:38:43 np0005592157 podman[99730]: 2026-01-22 13:38:43.236416624 +0000 UTC m=+0.205708747 container attach d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ishizaka, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:43 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 123 B/s, 5 objects/s recovering
Jan 22 08:38:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 22 08:38:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 08:38:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:43.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:43.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:44 np0005592157 busy_ishizaka[99746]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:38:44 np0005592157 busy_ishizaka[99746]: --> relative data size: 1.0
Jan 22 08:38:44 np0005592157 busy_ishizaka[99746]: --> All data devices are unavailable
Jan 22 08:38:44 np0005592157 systemd[1]: libpod-d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45.scope: Deactivated successfully.
Jan 22 08:38:44 np0005592157 podman[99764]: 2026-01-22 13:38:44.156107188 +0000 UTC m=+0.038104274 container died d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ishizaka, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-13b18a4f599afe6534215ebc17db166821e4dc14991ceb9dbf530f6bc1ec4e67-merged.mount: Deactivated successfully.
Jan 22 08:38:44 np0005592157 podman[99764]: 2026-01-22 13:38:44.228419458 +0000 UTC m=+0.110416524 container remove d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ishizaka, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:38:44 np0005592157 systemd[1]: libpod-conmon-d421e4ce00c9c77e64e5cb16ba2ff77013e168f4f39ac3629c60d7e5f7e81b45.scope: Deactivated successfully.
Jan 22 08:38:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 22 08:38:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 08:38:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 08:38:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 22 08:38:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 22 08:38:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 101 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=101 pruub=12.619853020s) [1] r=-1 lpr=101 pi=[59,101)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 265.651153564s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 101 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=101 pruub=12.619548798s) [1] r=-1 lpr=101 pi=[59,101)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.651153564s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:45 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 22 08:38:45 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 22 08:38:45 np0005592157 podman[99924]: 2026-01-22 13:38:45.117451222 +0000 UTC m=+0.074602217 container create eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:45 np0005592157 systemd[1]: Started libpod-conmon-eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2.scope.
Jan 22 08:38:45 np0005592157 podman[99924]: 2026-01-22 13:38:45.088874192 +0000 UTC m=+0.046025267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:45 np0005592157 podman[99924]: 2026-01-22 13:38:45.214493007 +0000 UTC m=+0.171644053 container init eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:38:45 np0005592157 podman[99924]: 2026-01-22 13:38:45.222651157 +0000 UTC m=+0.179802162 container start eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:38:45 np0005592157 podman[99924]: 2026-01-22 13:38:45.226985323 +0000 UTC m=+0.184136338 container attach eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:38:45 np0005592157 cranky_wilbur[99940]: 167 167
Jan 22 08:38:45 np0005592157 systemd[1]: libpod-eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2.scope: Deactivated successfully.
Jan 22 08:38:45 np0005592157 podman[99924]: 2026-01-22 13:38:45.233465112 +0000 UTC m=+0.190616107 container died eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:38:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-40aee9b1df61864cae6927656213a01e804a5004e5d7a31a840e3102f2e5133a-merged.mount: Deactivated successfully.
Jan 22 08:38:45 np0005592157 podman[99924]: 2026-01-22 13:38:45.286790307 +0000 UTC m=+0.243941282 container remove eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:45 np0005592157 systemd[1]: libpod-conmon-eb70f04ac476c1fbada23dcb203bdf366f8ac16071b6f95a523434df59b362b2.scope: Deactivated successfully.
Jan 22 08:38:45 np0005592157 podman[99965]: 2026-01-22 13:38:45.488443894 +0000 UTC m=+0.050387565 container create 27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 22 08:38:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 97 B/s, 4 objects/s recovering
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:45 np0005592157 systemd[1]: Started libpod-conmon-27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64.scope.
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 22 08:38:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 22 08:38:45 np0005592157 podman[99965]: 2026-01-22 13:38:45.467234615 +0000 UTC m=+0.029178286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:45 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 102 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=0 lpr=102 pi=[59,102)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:45 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 102 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=0 lpr=102 pi=[59,102)/1 crt=58'684 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edffd23cc96d53f081ef03289aec17e42c05816c2b977e368c2ccd3b02a112e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edffd23cc96d53f081ef03289aec17e42c05816c2b977e368c2ccd3b02a112e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edffd23cc96d53f081ef03289aec17e42c05816c2b977e368c2ccd3b02a112e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edffd23cc96d53f081ef03289aec17e42c05816c2b977e368c2ccd3b02a112e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:45.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:45 np0005592157 podman[99965]: 2026-01-22 13:38:45.60224825 +0000 UTC m=+0.164191901 container init 27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 22 08:38:45 np0005592157 podman[99965]: 2026-01-22 13:38:45.611296811 +0000 UTC m=+0.173240482 container start 27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:45 np0005592157 podman[99965]: 2026-01-22 13:38:45.615192087 +0000 UTC m=+0.177135748 container attach 27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:45.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:46 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 22 08:38:46 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]: {
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:    "0": [
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:        {
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "devices": [
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "/dev/loop3"
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            ],
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "lv_name": "ceph_lv0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "lv_size": "7511998464",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "name": "ceph_lv0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "tags": {
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.cluster_name": "ceph",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.crush_device_class": "",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.encrypted": "0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.osd_id": "0",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.type": "block",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:                "ceph.vdo": "0"
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            },
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "type": "block",
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:            "vg_name": "ceph_vg0"
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:        }
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]:    ]
Jan 22 08:38:46 np0005592157 condescending_heyrovsky[99983]: }
Jan 22 08:38:46 np0005592157 systemd[1]: libpod-27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64.scope: Deactivated successfully.
Jan 22 08:38:46 np0005592157 podman[99992]: 2026-01-22 13:38:46.414993085 +0000 UTC m=+0.033587383 container died 27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:38:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9edffd23cc96d53f081ef03289aec17e42c05816c2b977e368c2ccd3b02a112e-merged.mount: Deactivated successfully.
Jan 22 08:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:38:46 np0005592157 podman[99992]: 2026-01-22 13:38:46.488970256 +0000 UTC m=+0.107564514 container remove 27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:38:46 np0005592157 systemd[1]: libpod-conmon-27e7a15b1e2137a9b71dd6d733639db3124803245316bc42d8099ca7f388ec64.scope: Deactivated successfully.
Jan 22 08:38:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 08:38:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 22 08:38:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 08:38:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 22 08:38:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 22 08:38:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 103 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=103 pruub=10.583200455s) [1] r=-1 lpr=103 pi=[59,103)/1 crt=62'701 lcod 62'700 mlcod 62'700 active pruub 265.651031494s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 103 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=103 pruub=10.583069801s) [1] r=-1 lpr=103 pi=[59,103)/1 crt=62'701 lcod 62'700 mlcod 0'0 unknown NOTIFY pruub 265.651031494s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 103 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=102/103 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[59,102)/1 crt=58'684 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:47 np0005592157 podman[100197]: 2026-01-22 13:38:47.165118118 +0000 UTC m=+0.049794450 container create 197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:38:47
Jan 22 08:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Jan 22 08:38:47 np0005592157 systemd[1]: Started libpod-conmon-197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f.scope.
Jan 22 08:38:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:47 np0005592157 podman[100197]: 2026-01-22 13:38:47.140117366 +0000 UTC m=+0.024793708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:47 np0005592157 podman[100197]: 2026-01-22 13:38:47.24243151 +0000 UTC m=+0.127107882 container init 197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:38:47 np0005592157 podman[100197]: 2026-01-22 13:38:47.254208059 +0000 UTC m=+0.138884401 container start 197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:38:47 np0005592157 podman[100197]: 2026-01-22 13:38:47.258216177 +0000 UTC m=+0.142892519 container attach 197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:38:47 np0005592157 boring_tesla[100213]: 167 167
Jan 22 08:38:47 np0005592157 systemd[1]: libpod-197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f.scope: Deactivated successfully.
Jan 22 08:38:47 np0005592157 podman[100197]: 2026-01-22 13:38:47.263997978 +0000 UTC m=+0.148674340 container died 197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:38:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-979957c68271df7c48cf2add903c9de8e8b90e175834155df5aaedd4924d49ee-merged.mount: Deactivated successfully.
Jan 22 08:38:47 np0005592157 podman[100197]: 2026-01-22 13:38:47.31310126 +0000 UTC m=+0.197777612 container remove 197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:47 np0005592157 systemd[1]: libpod-conmon-197a7d27ad048cc1e08156f304c49433ccc3b4f8fd5691b8a767e7cf8efbb27f.scope: Deactivated successfully.
Jan 22 08:38:47 np0005592157 podman[100238]: 2026-01-22 13:38:47.507975021 +0000 UTC m=+0.060069192 container create 7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:38:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 22 08:38:47 np0005592157 systemd[1]: Started libpod-conmon-7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0.scope.
Jan 22 08:38:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 22 08:38:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:38:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771ebc6defcda2034b378c8f11b691ecda5e931d71120595eac41a1f7c66876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771ebc6defcda2034b378c8f11b691ecda5e931d71120595eac41a1f7c66876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771ebc6defcda2034b378c8f11b691ecda5e931d71120595eac41a1f7c66876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771ebc6defcda2034b378c8f11b691ecda5e931d71120595eac41a1f7c66876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:47.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:47 np0005592157 podman[100238]: 2026-01-22 13:38:47.489084099 +0000 UTC m=+0.041178310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:47 np0005592157 podman[100238]: 2026-01-22 13:38:47.600451505 +0000 UTC m=+0.152545666 container init 7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 22 08:38:47 np0005592157 podman[100238]: 2026-01-22 13:38:47.616336294 +0000 UTC m=+0.168430455 container start 7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:38:47 np0005592157 podman[100238]: 2026-01-22 13:38:47.619695286 +0000 UTC m=+0.171789447 container attach 7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:38:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 22 08:38:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 22 08:38:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 104 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=102/103 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104 pruub=14.878838539s) [1] async=[1] r=-1 lpr=104 pi=[59,104)/1 crt=58'684 lcod 0'0 mlcod 0'0 active pruub 271.080627441s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 104 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=104) [1]/[0] r=0 lpr=104 pi=[59,104)/1 crt=62'701 lcod 62'700 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 104 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=102/103 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104 pruub=14.878752708s) [1] r=-1 lpr=104 pi=[59,104)/1 crt=58'684 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 271.080627441s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:47 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 104 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=104) [1]/[0] r=0 lpr=104 pi=[59,104)/1 crt=62'701 lcod 62'700 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 08:38:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:47.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]: {
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]:        "osd_id": 0,
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]:        "type": "bluestore"
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]:    }
Jan 22 08:38:48 np0005592157 hardcore_thompson[100254]: }
Jan 22 08:38:48 np0005592157 systemd[1]: libpod-7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0.scope: Deactivated successfully.
Jan 22 08:38:48 np0005592157 podman[100238]: 2026-01-22 13:38:48.460808046 +0000 UTC m=+1.012902217 container died 7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:38:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4771ebc6defcda2034b378c8f11b691ecda5e931d71120595eac41a1f7c66876-merged.mount: Deactivated successfully.
Jan 22 08:38:48 np0005592157 podman[100238]: 2026-01-22 13:38:48.527803186 +0000 UTC m=+1.079897347 container remove 7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 08:38:48 np0005592157 systemd[1]: libpod-conmon-7e1fd928accd72ddf2a09c397cba2927c69ae46c7c7429664ce00dce5c2decc0.scope: Deactivated successfully.
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 05b6e646-d74e-435b-a7c5-d970461f5654 does not exist
Jan 22 08:38:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b020c560-f7e3-4840-8fd5-3ec8c20ba049 does not exist
Jan 22 08:38:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dd08dab1-3719-48e3-949d-22449e197f20 does not exist
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 22 08:38:48 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 105 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=104/105 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=104) [1]/[0] async=[1] r=0 lpr=104 pi=[59,104)/1 crt=62'701 lcod 62'700 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:49 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 22 08:38:49 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 22 08:38:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:38:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:49.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 22 08:38:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 1 active+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 08:38:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 22 08:38:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 22 08:38:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 106 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=104/105 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106 pruub=13.139074326s) [1] async=[1] r=-1 lpr=106 pi=[59,106)/1 crt=62'701 lcod 62'700 mlcod 62'700 active pruub 273.218200684s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 106 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=104/105 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106 pruub=13.138939857s) [1] r=-1 lpr=106 pi=[59,106)/1 crt=62'701 lcod 62'700 mlcod 0'0 unknown NOTIFY pruub 273.218200684s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:51.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:38:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:51.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 22 08:38:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 22 08:38:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 22 08:38:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:53 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 08:38:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 22 08:38:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 08:38:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:53.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:54 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 22 08:38:54 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 22 08:38:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 22 08:38:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 08:38:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 08:38:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 22 08:38:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 22 08:38:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 08:38:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:55.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 22 08:38:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 22 08:38:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 22 08:38:56 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 22 08:38:56 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 22 08:38:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 08:38:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 08:38:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:38:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 22 08:38:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 08:38:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 22 08:38:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 08:38:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 22 08:38:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 22 08:38:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 08:38:59 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 22 08:38:59 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 22 08:38:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:59 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 08:38:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:38:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 22 08:38:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 08:38:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:38:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:38:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:38:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:38:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 22 08:39:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 08:39:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 22 08:39:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 22 08:39:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 08:39:01 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 22 08:39:01 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 22 08:39:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 22 08:39:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 22 08:39:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 22 08:39:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 08:39:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 22 08:39:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 08:39:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 22 08:39:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:39:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 22 08:39:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 08:39:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:03.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:04 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 22 08:39:04 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 22 08:39:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:05.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:05.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 1 active+remapped, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 B/s, 0 objects/s recovering
Jan 22 08:39:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:07.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:08 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Jan 22 08:39:08 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Jan 22 08:39:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 1 remapped+peering, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:09.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 22 08:39:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 22 08:39:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 1 active+remapped, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:11.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 22 08:39:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 22 08:39:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 08:39:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:13.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 08:39:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:13.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 22 08:39:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 22 08:39:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 22 08:39:14 np0005592157 python3.9[100554]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:39:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:39:16 np0005592157 python3.9[100842]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 08:39:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 22 08:39:17 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 22 08:39:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 17 B/s, 0 objects/s recovering
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 08:39:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:17.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:17 np0005592157 python3.9[100995]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 22 08:39:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:17.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 08:39:17 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 22 08:39:18 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 22 08:39:18 np0005592157 python3.9[101147]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:39:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 08:39:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 16 B/s, 0 objects/s recovering
Jan 22 08:39:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 22 08:39:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 08:39:19 np0005592157 python3.9[101299]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 08:39:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:19.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:19.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 22 08:39:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 08:39:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 22 08:39:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 22 08:39:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 08:39:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:21 np0005592157 python3.9[101452]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 22 08:39:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 08:39:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:21.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:21.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 22 08:39:22 np0005592157 python3.9[101605]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:22 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 120 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=120) [0] r=0 lpr=120 pi=[86,120)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 08:39:22 np0005592157 python3.9[101683]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 22 08:39:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 22 08:39:22 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 121 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] r=-1 lpr=121 pi=[86,121)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:22 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 121 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] r=-1 lpr=121 pi=[86,121)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 08:39:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:23.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 22 08:39:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 22 08:39:23 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 122 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=122) [0] r=0 lpr=122 pi=[92,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:24 np0005592157 python3.9[101836]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:24 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 22 08:39:24 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 22 08:39:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 08:39:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 08:39:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 22 08:39:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 22 08:39:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 22 08:39:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 123 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=0/0 n=6 ec=59/49 lis/c=121/86 les/c/f=122/88/0 sis=123) [0] r=0 lpr=123 pi=[86,123)/1 luod=0'0 crt=62'705 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 123 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=0/0 n=6 ec=59/49 lis/c=121/86 les/c/f=122/88/0 sis=123) [0] r=0 lpr=123 pi=[86,123)/1 crt=62'705 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 123 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] r=-1 lpr=123 pi=[92,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:25 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 123 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] r=-1 lpr=123 pi=[92,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:25 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 22 08:39:25 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 22 08:39:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 22 08:39:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 08:39:25 np0005592157 python3.9[101991]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 08:39:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:25.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 22 08:39:26 np0005592157 python3.9[102144]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 124 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=123/124 n=6 ec=59/49 lis/c=121/86 les/c/f=122/88/0 sis=123) [0] r=0 lpr=123 pi=[86,123)/1 crt=62'705 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:27 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 124 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=124) [0] r=0 lpr=124 pi=[70,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 22 B/s, 1 objects/s recovering
Jan 22 08:39:27 np0005592157 python3.9[102348]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:39:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:27.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 125 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[70,125)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:27 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 125 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[70,125)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:27.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:28 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 22 08:39:28 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 22 08:39:28 np0005592157 python3.9[102500]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 08:39:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 08:39:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 08:39:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 22 08:39:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 22 08:39:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 22 08:39:29 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 126 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=0/0 n=4 ec=59/49 lis/c=123/92 les/c/f=125/93/0 sis=126) [0] r=0 lpr=126 pi=[92,126)/1 luod=0'0 crt=61'690 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:29 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 126 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=0/0 n=4 ec=59/49 lis/c=123/92 les/c/f=125/93/0 sis=126) [0] r=0 lpr=126 pi=[92,126)/1 crt=61'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:29 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 22 08:39:29 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 22 08:39:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 1 active+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 49 B/s, 2 objects/s recovering
Jan 22 08:39:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:29.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:29 np0005592157 python3.9[102653]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:39:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:29.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 22 08:39:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 22 08:39:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 22 08:39:30 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 127 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=0/0 n=3 ec=59/49 lis/c=125/70 les/c/f=126/71/0 sis=127) [0] r=0 lpr=127 pi=[70,127)/1 luod=0'0 crt=61'686 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:30 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 127 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=0/0 n=3 ec=59/49 lis/c=125/70 les/c/f=126/71/0 sis=127) [0] r=0 lpr=127 pi=[70,127)/1 crt=61'686 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:30 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 127 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=126/127 n=4 ec=59/49 lis/c=123/92 les/c/f=125/93/0 sis=126) [0] r=0 lpr=126 pi=[92,126)/1 crt=61'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 22 08:39:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 22 08:39:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 22 08:39:31 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 128 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=127/128 n=3 ec=59/49 lis/c=125/70 les/c/f=126/71/0 sis=127) [0] r=0 lpr=127 pi=[70,127)/1 crt=61'686 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 1 objects/s recovering
Jan 22 08:39:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:31.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:31 np0005592157 python3.9[102807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:31.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:32 np0005592157 python3.9[102959]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:33 np0005592157 python3.9[103037]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:33.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:33.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:33 np0005592157 python3.9[103190]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:34 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 22 08:39:34 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 22 08:39:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:34 np0005592157 python3.9[103268]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:35 np0005592157 python3.9[103420]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:39:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:35.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:36 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 22 08:39:36 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.379513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176379864, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7982, "num_deletes": 251, "total_data_size": 10450112, "memory_usage": 10614096, "flush_reason": "Manual Compaction"}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176469497, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8782896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 8118, "table_properties": {"data_size": 8752223, "index_size": 20126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 89867, "raw_average_key_size": 23, "raw_value_size": 8679347, "raw_average_value_size": 2304, "num_data_blocks": 880, "num_entries": 3766, "num_filter_entries": 3766, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088717, "oldest_key_time": 1769088717, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 89950 microseconds, and 34452 cpu microseconds.
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.469596) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8782896 bytes OK
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.469632) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.472068) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.472095) EVENT_LOG_v1 {"time_micros": 1769089176472090, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.472122) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10414123, prev total WAL file size 10414123, number of live WAL files 2.
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.474832) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(8577KB) 13(53KB) 8(1944B)]
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176475080, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8839689, "oldest_snapshot_seqno": -1}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3578 keys, 8795113 bytes, temperature: kUnknown
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176540028, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8795113, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8764941, "index_size": 20142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 87772, "raw_average_key_size": 24, "raw_value_size": 8693770, "raw_average_value_size": 2429, "num_data_blocks": 884, "num_entries": 3578, "num_filter_entries": 3578, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.540310) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8795113 bytes
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.541650) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.0 rd, 135.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(8.4, 0.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3870, records dropped: 292 output_compression: NoCompression
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.541681) EVENT_LOG_v1 {"time_micros": 1769089176541672, "job": 4, "event": "compaction_finished", "compaction_time_micros": 65021, "compaction_time_cpu_micros": 25493, "output_level": 6, "num_output_files": 1, "total_output_size": 8795113, "num_input_records": 3870, "num_output_records": 3578, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176543591, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176543685, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176543748, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 22 08:39:36 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:39:36.474541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:39:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 22 08:39:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 08:39:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:37.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:37 np0005592157 python3.9[103574]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:38 np0005592157 python3.9[103726]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 08:39:39 np0005592157 python3.9[103876]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 08:39:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 22 08:39:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 08:39:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:39.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:40 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 22 08:39:40 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 22 08:39:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 22 08:39:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 08:39:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 08:39:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 22 08:39:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 22 08:39:41 np0005592157 python3.9[104029]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:41 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 22 08:39:41 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 22 08:39:41 np0005592157 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 08:39:41 np0005592157 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 08:39:41 np0005592157 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 08:39:41 np0005592157 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:39:41 np0005592157 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:39:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 1 unknown, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 22 08:39:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:41.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 08:39:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 22 08:39:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 22 08:39:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:42 np0005592157 python3.9[104192]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 08:39:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:43 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 22 08:39:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 22 08:39:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 22 08:39:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 08:39:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 22 08:39:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 08:39:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:43.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 22 08:39:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 08:39:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 08:39:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 22 08:39:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 22 08:39:44 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 133 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=133) [0] r=0 lpr=133 pi=[77,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 22 08:39:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 08:39:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 22 08:39:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 22 08:39:45 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 134 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[77,134)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:45 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 134 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[77,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s; 164 B/s, 3 objects/s recovering
Jan 22 08:39:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 22 08:39:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:39:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:45.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 22 08:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:39:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:39:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:39:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 22 08:39:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 22 08:39:46 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=135) [0] r=0 lpr=135 pi=[99,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:47 np0005592157 python3.9[104346]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:39:47
Jan 22 08:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta']
Jan 22 08:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:39:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 16 op/s; 155 B/s, 3 objects/s recovering
Jan 22 08:39:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 22 08:39:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:47.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:47.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:47 np0005592157 python3.9[104551]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:49 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 22 08:39:49 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 22 08:39:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:39:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 22 08:39:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 22 08:39:49 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 136 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=0/0 n=5 ec=59/49 lis/c=134/77 les/c/f=135/78/0 sis=136) [0] r=0 lpr=136 pi=[77,136)/1 luod=0'0 crt=62'697 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:49 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 136 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=0/0 n=5 ec=59/49 lis/c=134/77 les/c/f=135/78/0 sis=136) [0] r=0 lpr=136 pi=[77,136)/1 crt=62'697 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:49 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] r=-1 lpr=136 pi=[99,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:49 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] r=-1 lpr=136 pi=[99,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:49 np0005592157 podman[104750]: 2026-01-22 13:39:49.878795353 +0000 UTC m=+0.073043909 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:39:50 np0005592157 podman[104750]: 2026-01-22 13:39:50.013414136 +0000 UTC m=+0.207662662 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:39:50 np0005592157 systemd[1]: session-35.scope: Deactivated successfully.
Jan 22 08:39:50 np0005592157 systemd[1]: session-35.scope: Consumed 1min 8.874s CPU time.
Jan 22 08:39:50 np0005592157 systemd-logind[785]: Session 35 logged out. Waiting for processes to exit.
Jan 22 08:39:50 np0005592157 systemd-logind[785]: Removed session 35.
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:39:50 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 137 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=136/137 n=5 ec=59/49 lis/c=134/77 les/c/f=135/78/0 sis=136) [0] r=0 lpr=136 pi=[77,136)/1 crt=62'697 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:50 np0005592157 podman[104906]: 2026-01-22 13:39:50.698447625 +0000 UTC m=+0.078862622 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:39:50 np0005592157 podman[104906]: 2026-01-22 13:39:50.712384118 +0000 UTC m=+0.092799085 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:39:50 np0005592157 podman[104970]: 2026-01-22 13:39:50.973098745 +0000 UTC m=+0.053914288 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, version=2.2.4, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 08:39:50 np0005592157 podman[104970]: 2026-01-22 13:39:50.995401694 +0000 UTC m=+0.076217227 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, version=2.2.4)
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 22 08:39:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 22 08:39:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 138 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=136/99 les/c/f=137/100/0 sis=138) [0] r=0 lpr=138 pi=[99,138)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:51 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 138 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=136/99 les/c/f=137/100/0 sis=138) [0] r=0 lpr=138 pi=[99,138)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:39:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:39:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:51.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:39:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:51.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7e80c5c5-9e55-4832-9c7b-47643b88f785 does not exist
Jan 22 08:39:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b8a2049a-366c-418b-b55f-385ca828c9b4 does not exist
Jan 22 08:39:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3d4ece2c-af10-4cb5-a32a-5dd5c61499f5 does not exist
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 22 08:39:52 np0005592157 ceph-osd[84809]: osd.0 pg_epoch: 139 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=138/139 n=5 ec=59/49 lis/c=136/99 les/c/f=137/100/0 sis=138) [0] r=0 lpr=138 pi=[99,138)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:39:53 np0005592157 podman[105277]: 2026-01-22 13:39:53.234578175 +0000 UTC m=+0.060703185 container create 1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:39:53 np0005592157 systemd[1]: Started libpod-conmon-1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086.scope.
Jan 22 08:39:53 np0005592157 podman[105277]: 2026-01-22 13:39:53.215082575 +0000 UTC m=+0.041207575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:39:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:39:53 np0005592157 podman[105277]: 2026-01-22 13:39:53.350892558 +0000 UTC m=+0.177017578 container init 1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shtern, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:39:53 np0005592157 podman[105277]: 2026-01-22 13:39:53.363700803 +0000 UTC m=+0.189825813 container start 1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:39:53 np0005592157 podman[105277]: 2026-01-22 13:39:53.36804648 +0000 UTC m=+0.194171550 container attach 1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shtern, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 08:39:53 np0005592157 unruffled_shtern[105293]: 167 167
Jan 22 08:39:53 np0005592157 systemd[1]: libpod-1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086.scope: Deactivated successfully.
Jan 22 08:39:53 np0005592157 podman[105277]: 2026-01-22 13:39:53.372368446 +0000 UTC m=+0.198493426 container died 1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:39:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3e6207009a2b53ef94b3fd2b19ae1ad48286806c4667b67ae17b55ff77be67b9-merged.mount: Deactivated successfully.
Jan 22 08:39:53 np0005592157 podman[105277]: 2026-01-22 13:39:53.421615278 +0000 UTC m=+0.247740258 container remove 1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:39:53 np0005592157 systemd[1]: libpod-conmon-1255ba6a80eb2a10603c7105b51b0486f0f029a0cf1f10c29524115e8da04086.scope: Deactivated successfully.
Jan 22 08:39:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:53 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:53 np0005592157 podman[105317]: 2026-01-22 13:39:53.613830859 +0000 UTC m=+0.047220063 container create a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shirley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:39:53 np0005592157 systemd[1]: Started libpod-conmon-a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799.scope.
Jan 22 08:39:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:53.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:39:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5cb5679716ce2bb454192f1936ad7c1295e5ad7ce4f340815b7941e23f6e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5cb5679716ce2bb454192f1936ad7c1295e5ad7ce4f340815b7941e23f6e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5cb5679716ce2bb454192f1936ad7c1295e5ad7ce4f340815b7941e23f6e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5cb5679716ce2bb454192f1936ad7c1295e5ad7ce4f340815b7941e23f6e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:53 np0005592157 podman[105317]: 2026-01-22 13:39:53.591784307 +0000 UTC m=+0.025173591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:39:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5cb5679716ce2bb454192f1936ad7c1295e5ad7ce4f340815b7941e23f6e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:53 np0005592157 podman[105317]: 2026-01-22 13:39:53.717826649 +0000 UTC m=+0.151215873 container init a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 08:39:53 np0005592157 podman[105317]: 2026-01-22 13:39:53.726771989 +0000 UTC m=+0.160161223 container start a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shirley, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:39:53 np0005592157 podman[105317]: 2026-01-22 13:39:53.731529656 +0000 UTC m=+0.164918900 container attach a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 08:39:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:54 np0005592157 vibrant_shirley[105334]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:39:54 np0005592157 vibrant_shirley[105334]: --> relative data size: 1.0
Jan 22 08:39:54 np0005592157 vibrant_shirley[105334]: --> All data devices are unavailable
Jan 22 08:39:54 np0005592157 systemd[1]: libpod-a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799.scope: Deactivated successfully.
Jan 22 08:39:54 np0005592157 podman[105317]: 2026-01-22 13:39:54.597984341 +0000 UTC m=+1.031373575 container died a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 08:39:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-12b5cb5679716ce2bb454192f1936ad7c1295e5ad7ce4f340815b7941e23f6e0-merged.mount: Deactivated successfully.
Jan 22 08:39:54 np0005592157 podman[105317]: 2026-01-22 13:39:54.675995471 +0000 UTC m=+1.109384685 container remove a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:39:54 np0005592157 systemd[1]: libpod-conmon-a4f08096a7b587c0bcde6270358a5920ef5b33d3d0b6cd3a26cdb74076d86799.scope: Deactivated successfully.
Jan 22 08:39:55 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 22 08:39:55 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 22 08:39:55 np0005592157 podman[105506]: 2026-01-22 13:39:55.536750456 +0000 UTC m=+0.061585187 container create d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_edison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 08:39:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:55 np0005592157 systemd[1]: Started libpod-conmon-d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4.scope.
Jan 22 08:39:55 np0005592157 podman[105506]: 2026-01-22 13:39:55.505976108 +0000 UTC m=+0.030810909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:39:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:39:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:55 np0005592157 podman[105506]: 2026-01-22 13:39:55.629918959 +0000 UTC m=+0.154753720 container init d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_edison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 22 08:39:55 np0005592157 podman[105506]: 2026-01-22 13:39:55.638595462 +0000 UTC m=+0.163430233 container start d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_edison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:39:55 np0005592157 stupefied_edison[105522]: 167 167
Jan 22 08:39:55 np0005592157 systemd[1]: libpod-d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4.scope: Deactivated successfully.
Jan 22 08:39:55 np0005592157 podman[105506]: 2026-01-22 13:39:55.646274741 +0000 UTC m=+0.171109502 container attach d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:39:55 np0005592157 podman[105506]: 2026-01-22 13:39:55.646736293 +0000 UTC m=+0.171571024 container died d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 08:39:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:55.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-62dffca547a476ab39fb542e6a11896738dc2a7c420a2a96f5f34322ba3c77ef-merged.mount: Deactivated successfully.
Jan 22 08:39:55 np0005592157 podman[105506]: 2026-01-22 13:39:55.698999769 +0000 UTC m=+0.223834520 container remove d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:39:55 np0005592157 systemd[1]: libpod-conmon-d7dfdc846ce79c366d543152aeff9dca3fc9154901f215a6f06eaf4292dc98b4.scope: Deactivated successfully.
Jan 22 08:39:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:55.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:55 np0005592157 podman[105546]: 2026-01-22 13:39:55.918565493 +0000 UTC m=+0.080702437 container create 448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:39:55 np0005592157 systemd[1]: Started libpod-conmon-448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d.scope.
Jan 22 08:39:55 np0005592157 podman[105546]: 2026-01-22 13:39:55.884444593 +0000 UTC m=+0.046581597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:39:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:39:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c714cb0a8e1247e37334aa07ac164dd29bcf6814f37552f729698a4a4d9b1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c714cb0a8e1247e37334aa07ac164dd29bcf6814f37552f729698a4a4d9b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c714cb0a8e1247e37334aa07ac164dd29bcf6814f37552f729698a4a4d9b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:56 np0005592157 systemd-logind[785]: New session 36 of user zuul.
Jan 22 08:39:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c714cb0a8e1247e37334aa07ac164dd29bcf6814f37552f729698a4a4d9b1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:56 np0005592157 systemd[1]: Started Session 36 of User zuul.
Jan 22 08:39:56 np0005592157 podman[105546]: 2026-01-22 13:39:56.033871101 +0000 UTC m=+0.196008045 container init 448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:39:56 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 22 08:39:56 np0005592157 podman[105546]: 2026-01-22 13:39:56.044113103 +0000 UTC m=+0.206250017 container start 448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 08:39:56 np0005592157 podman[105546]: 2026-01-22 13:39:56.048108081 +0000 UTC m=+0.210244995 container attach 448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:39:56 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 22 08:39:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]: {
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:    "0": [
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:        {
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "devices": [
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "/dev/loop3"
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            ],
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "lv_name": "ceph_lv0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "lv_size": "7511998464",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "name": "ceph_lv0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "tags": {
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.cluster_name": "ceph",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.crush_device_class": "",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.encrypted": "0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.osd_id": "0",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.type": "block",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:                "ceph.vdo": "0"
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            },
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "type": "block",
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:            "vg_name": "ceph_vg0"
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:        }
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]:    ]
Jan 22 08:39:56 np0005592157 naughty_goodall[105562]: }
Jan 22 08:39:56 np0005592157 systemd[1]: libpod-448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d.scope: Deactivated successfully.
Jan 22 08:39:56 np0005592157 podman[105546]: 2026-01-22 13:39:56.86836699 +0000 UTC m=+1.030503934 container died 448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 08:39:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-19c714cb0a8e1247e37334aa07ac164dd29bcf6814f37552f729698a4a4d9b1d-merged.mount: Deactivated successfully.
Jan 22 08:39:56 np0005592157 podman[105546]: 2026-01-22 13:39:56.952537702 +0000 UTC m=+1.114674616 container remove 448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 08:39:56 np0005592157 systemd[1]: libpod-conmon-448eae28b24aeb6226fa639d051617726080e80b61d24cbfaa28d1d25324da7d.scope: Deactivated successfully.
Jan 22 08:39:57 np0005592157 python3.9[105723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:39:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:39:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:57.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:57 np0005592157 podman[105910]: 2026-01-22 13:39:57.702837517 +0000 UTC m=+0.063069383 container create f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:39:57 np0005592157 systemd[1]: Started libpod-conmon-f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd.scope.
Jan 22 08:39:57 np0005592157 podman[105910]: 2026-01-22 13:39:57.679644716 +0000 UTC m=+0.039876582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:39:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:39:57 np0005592157 podman[105910]: 2026-01-22 13:39:57.806189541 +0000 UTC m=+0.166421447 container init f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:39:57 np0005592157 podman[105910]: 2026-01-22 13:39:57.818096654 +0000 UTC m=+0.178328530 container start f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:39:57 np0005592157 podman[105910]: 2026-01-22 13:39:57.822452491 +0000 UTC m=+0.182684417 container attach f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 08:39:57 np0005592157 busy_payne[105926]: 167 167
Jan 22 08:39:57 np0005592157 podman[105910]: 2026-01-22 13:39:57.826615644 +0000 UTC m=+0.186847480 container died f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 08:39:57 np0005592157 systemd[1]: libpod-f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd.scope: Deactivated successfully.
Jan 22 08:39:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7614a324c0e705a3334b16fdf894e6a2adb7370647e24c3ea8251bfc90c7bcd5-merged.mount: Deactivated successfully.
Jan 22 08:39:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:57.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:39:57 np0005592157 podman[105910]: 2026-01-22 13:39:57.869676403 +0000 UTC m=+0.229908229 container remove f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:39:57 np0005592157 systemd[1]: libpod-conmon-f19a7c09bfee6443d9405b6f402becb78e8bfec5a720f6c6c09f3f506205e9fd.scope: Deactivated successfully.
Jan 22 08:39:58 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 22 08:39:58 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 22 08:39:58 np0005592157 podman[106001]: 2026-01-22 13:39:58.086741806 +0000 UTC m=+0.059521686 container create 75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:39:58 np0005592157 systemd[1]: Started libpod-conmon-75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705.scope.
Jan 22 08:39:58 np0005592157 podman[106001]: 2026-01-22 13:39:58.065155655 +0000 UTC m=+0.037935555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:39:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:39:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac0bb9c0cc0262ba4058fc458fd0f66fa3c5489360e70a1695cec95d9947bd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac0bb9c0cc0262ba4058fc458fd0f66fa3c5489360e70a1695cec95d9947bd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac0bb9c0cc0262ba4058fc458fd0f66fa3c5489360e70a1695cec95d9947bd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac0bb9c0cc0262ba4058fc458fd0f66fa3c5489360e70a1695cec95d9947bd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:39:58 np0005592157 podman[106001]: 2026-01-22 13:39:58.204266258 +0000 UTC m=+0.177046158 container init 75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 08:39:58 np0005592157 podman[106001]: 2026-01-22 13:39:58.215423183 +0000 UTC m=+0.188203043 container start 75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:39:58 np0005592157 podman[106001]: 2026-01-22 13:39:58.219207706 +0000 UTC m=+0.191987566 container attach 75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:39:58 np0005592157 python3.9[106097]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 08:39:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]: {
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]:        "osd_id": 0,
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]:        "type": "bluestore"
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]:    }
Jan 22 08:39:59 np0005592157 sad_goldstine[106025]: }
Jan 22 08:39:59 np0005592157 systemd[1]: libpod-75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705.scope: Deactivated successfully.
Jan 22 08:39:59 np0005592157 podman[106001]: 2026-01-22 13:39:59.142676564 +0000 UTC m=+1.115456454 container died 75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:39:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fac0bb9c0cc0262ba4058fc458fd0f66fa3c5489360e70a1695cec95d9947bd0-merged.mount: Deactivated successfully.
Jan 22 08:39:59 np0005592157 podman[106001]: 2026-01-22 13:39:59.221759141 +0000 UTC m=+1.194539001 container remove 75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:39:59 np0005592157 systemd[1]: libpod-conmon-75f0d1409da2dd136df09bee4e66b140a55b55acf398531b5817c16e69a7b705.scope: Deactivated successfully.
Jan 22 08:39:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:39:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:39:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev eae4dcbe-fa54-42f4-8a98-314f71f65af2 does not exist
Jan 22 08:39:59 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 55f2acef-3d56-4368-8637-59d6fb019613 does not exist
Jan 22 08:39:59 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6548b191-a2ff-456c-9e98-298e1f1d502b does not exist
Jan 22 08:39:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:39:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:59 np0005592157 python3.9[106305]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:39:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:39:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:39:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:59.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:00 np0005592157 python3.9[106413]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:40:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 22 08:40:00 np0005592157 ceph-osd[84809]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 22 08:40:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:01.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:01.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:02 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:02 np0005592157 python3.9[106567]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:40:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:40:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:40:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:05 np0005592157 python3.9[106721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:40:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:40:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:40:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:05.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:06 np0005592157 python3.9[106875]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:07 np0005592157 python3.9[107027]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 08:40:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:07.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:08 np0005592157 python3.9[107228]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:08 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:09 np0005592157 python3.9[107387]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:09.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:11.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:11.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:11 np0005592157 python3.9[107541]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:13.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:13 np0005592157 python3.9[107829]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 08:40:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:13.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:14 np0005592157 python3.9[107979]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:15 np0005592157 python3.9[108133]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:15.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:15.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:40:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:17 np0005592157 python3.9[108288]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:17.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:17.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:18 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:40:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:40:19 np0005592157 python3.9[108442]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:19.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:20 np0005592157 python3.9[108596]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 22 08:40:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:21.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:21.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:22 np0005592157 systemd[1]: session-36.scope: Deactivated successfully.
Jan 22 08:40:22 np0005592157 systemd[1]: session-36.scope: Consumed 18.816s CPU time.
Jan 22 08:40:22 np0005592157 systemd-logind[785]: Session 36 logged out. Waiting for processes to exit.
Jan 22 08:40:22 np0005592157 systemd-logind[785]: Removed session 36.
Jan 22 08:40:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:23.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:27 np0005592157 systemd-logind[785]: New session 37 of user zuul.
Jan 22 08:40:27 np0005592157 systemd[1]: Started Session 37 of User zuul.
Jan 22 08:40:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:40:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:27.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:40:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:27.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:28 np0005592157 python3.9[108828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:28 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:29 np0005592157 python3.9[108983]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:29.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:40:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:40:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:31 np0005592157 python3.9[109176]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:31 np0005592157 systemd[1]: session-37.scope: Deactivated successfully.
Jan 22 08:40:31 np0005592157 systemd[1]: session-37.scope: Consumed 2.684s CPU time.
Jan 22 08:40:31 np0005592157 systemd-logind[785]: Session 37 logged out. Waiting for processes to exit.
Jan 22 08:40:31 np0005592157 systemd-logind[785]: Removed session 37.
Jan 22 08:40:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:31.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:31.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:33.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:33.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:37 np0005592157 systemd-logind[785]: New session 38 of user zuul.
Jan 22 08:40:37 np0005592157 systemd[1]: Started Session 38 of User zuul.
Jan 22 08:40:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:37.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:37.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:37 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:38 np0005592157 python3.9[109359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:39 np0005592157 python3.9[109514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:39.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:40:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:39.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:40:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:40 np0005592157 python3.9[109670]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:41 np0005592157 python3.9[109755]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:41.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:43 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:40:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:43.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:40:43 np0005592157 python3.9[109909]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:43.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592157 python3.9[110104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:40:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:45.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:45.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:46 np0005592157 python3.9[110257]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:40:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:47 np0005592157 python3.9[110422]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:40:47
Jan 22 08:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr']
Jan 22 08:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:40:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:47 np0005592157 python3.9[110501]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:40:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:47.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:47.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:48 np0005592157 python3.9[110703]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:40:48 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:49 np0005592157 python3.9[110781]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:49.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:49.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:50 np0005592157 python3.9[110934]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:50 np0005592157 python3.9[111086]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:51 np0005592157 python3.9[111238]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:51.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:51.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:52 np0005592157 python3.9[111391]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:53 np0005592157 python3.9[111543]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:53.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:53.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:55.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:55 np0005592157 python3.9[111698]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:55.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:56 np0005592157 python3.9[111852]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:57 np0005592157 python3.9[112004]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:40:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:57.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:40:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:57.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:40:58 np0005592157 python3.9[112157]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:59 np0005592157 python3.9[112310]: ansible-service_facts Invoked
Jan 22 08:40:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:40:59 np0005592157 network[112328]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:40:59 np0005592157 network[112329]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:40:59 np0005592157 network[112330]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:40:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:59.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:40:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:40:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:59.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:01 np0005592157 podman[112541]: 2026-01-22 13:41:01.088897466 +0000 UTC m=+0.074720488 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:41:01 np0005592157 podman[112541]: 2026-01-22 13:41:01.203306579 +0000 UTC m=+0.189129591 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:41:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:01.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:01.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:02 np0005592157 podman[112727]: 2026-01-22 13:41:02.002864349 +0000 UTC m=+0.094213742 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:41:02 np0005592157 podman[112727]: 2026-01-22 13:41:02.010768245 +0000 UTC m=+0.102117608 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:41:02 np0005592157 podman[112808]: 2026-01-22 13:41:02.225244226 +0000 UTC m=+0.050097536 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 08:41:02 np0005592157 podman[112808]: 2026-01-22 13:41:02.240197557 +0000 UTC m=+0.065050847 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, distribution-scope=public, architecture=x86_64, vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, version=2.2.4, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3419a6d0-a968-4491-803d-da7598d245d1 does not exist
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e0d722f4-270d-412f-96d9-20aac4db1479 does not exist
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dbfba338-3ba5-4949-bbcb-28c53613ea5a does not exist
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:41:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:03.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:03.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:04 np0005592157 podman[113294]: 2026-01-22 13:41:04.043180526 +0000 UTC m=+0.057733336 container create a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 08:41:04 np0005592157 systemd[1]: Started libpod-conmon-a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a.scope.
Jan 22 08:41:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:41:04 np0005592157 podman[113294]: 2026-01-22 13:41:04.010999096 +0000 UTC m=+0.025551926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:41:04 np0005592157 podman[113294]: 2026-01-22 13:41:04.116033866 +0000 UTC m=+0.130586706 container init a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:41:04 np0005592157 podman[113294]: 2026-01-22 13:41:04.124006055 +0000 UTC m=+0.138558875 container start a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:41:04 np0005592157 podman[113294]: 2026-01-22 13:41:04.128373793 +0000 UTC m=+0.142926623 container attach a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 22 08:41:04 np0005592157 peaceful_euclid[113323]: 167 167
Jan 22 08:41:04 np0005592157 systemd[1]: libpod-a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a.scope: Deactivated successfully.
Jan 22 08:41:04 np0005592157 podman[113294]: 2026-01-22 13:41:04.132979418 +0000 UTC m=+0.147532238 container died a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 08:41:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e574304281d37ede6502c5f73e58eb3535a5568889d251bd16ac562121116c79-merged.mount: Deactivated successfully.
Jan 22 08:41:04 np0005592157 podman[113294]: 2026-01-22 13:41:04.177198797 +0000 UTC m=+0.191751607 container remove a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:41:04 np0005592157 systemd[1]: libpod-conmon-a8ed40826cd23b70a8bf317a851659f4f3155a4f33c84c6bc8bc5dec3bd41e0a.scope: Deactivated successfully.
Jan 22 08:41:04 np0005592157 podman[113383]: 2026-01-22 13:41:04.325564524 +0000 UTC m=+0.043204555 container create 1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 08:41:04 np0005592157 systemd[1]: Started libpod-conmon-1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6.scope.
Jan 22 08:41:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:41:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3559511f425de547b7d68579a51ff1c908862ae90b7dd9f7eac0e4ed5a670/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:04 np0005592157 podman[113383]: 2026-01-22 13:41:04.306647624 +0000 UTC m=+0.024287675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:41:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3559511f425de547b7d68579a51ff1c908862ae90b7dd9f7eac0e4ed5a670/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3559511f425de547b7d68579a51ff1c908862ae90b7dd9f7eac0e4ed5a670/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3559511f425de547b7d68579a51ff1c908862ae90b7dd9f7eac0e4ed5a670/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3559511f425de547b7d68579a51ff1c908862ae90b7dd9f7eac0e4ed5a670/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:04 np0005592157 podman[113383]: 2026-01-22 13:41:04.419497028 +0000 UTC m=+0.137137119 container init 1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:41:04 np0005592157 podman[113383]: 2026-01-22 13:41:04.426732808 +0000 UTC m=+0.144372849 container start 1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:41:04 np0005592157 podman[113383]: 2026-01-22 13:41:04.430765098 +0000 UTC m=+0.148405219 container attach 1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 08:41:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:05 np0005592157 python3.9[113532]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:41:05 np0005592157 lucid_goldwasser[113400]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:41:05 np0005592157 lucid_goldwasser[113400]: --> relative data size: 1.0
Jan 22 08:41:05 np0005592157 lucid_goldwasser[113400]: --> All data devices are unavailable
Jan 22 08:41:05 np0005592157 systemd[1]: libpod-1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6.scope: Deactivated successfully.
Jan 22 08:41:05 np0005592157 podman[113383]: 2026-01-22 13:41:05.28827947 +0000 UTC m=+1.005919531 container died 1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:41:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8da3559511f425de547b7d68579a51ff1c908862ae90b7dd9f7eac0e4ed5a670-merged.mount: Deactivated successfully.
Jan 22 08:41:05 np0005592157 podman[113383]: 2026-01-22 13:41:05.375787983 +0000 UTC m=+1.093428014 container remove 1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:41:05 np0005592157 systemd[1]: libpod-conmon-1010ae70292d5d3b9d9627b5ec882e4dd5afd534591bb8318ce0d9bc8983c8d6.scope: Deactivated successfully.
Jan 22 08:41:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:05.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:06 np0005592157 podman[113697]: 2026-01-22 13:41:06.130294805 +0000 UTC m=+0.047474341 container create 2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:41:06 np0005592157 systemd[1]: Started libpod-conmon-2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1.scope.
Jan 22 08:41:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:41:06 np0005592157 podman[113697]: 2026-01-22 13:41:06.203576906 +0000 UTC m=+0.120756472 container init 2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:41:06 np0005592157 podman[113697]: 2026-01-22 13:41:06.11481792 +0000 UTC m=+0.031997476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:41:06 np0005592157 podman[113697]: 2026-01-22 13:41:06.212007005 +0000 UTC m=+0.129186561 container start 2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:41:06 np0005592157 magical_wescoff[113714]: 167 167
Jan 22 08:41:06 np0005592157 podman[113697]: 2026-01-22 13:41:06.21542231 +0000 UTC m=+0.132601886 container attach 2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:41:06 np0005592157 systemd[1]: libpod-2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1.scope: Deactivated successfully.
Jan 22 08:41:06 np0005592157 podman[113697]: 2026-01-22 13:41:06.216699582 +0000 UTC m=+0.133879138 container died 2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:41:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-232e1517111fd27ce6d1e54ee58c42b07ce29cc5067a0f90ef13cc4f3c9bedaa-merged.mount: Deactivated successfully.
Jan 22 08:41:06 np0005592157 podman[113697]: 2026-01-22 13:41:06.262341026 +0000 UTC m=+0.179520582 container remove 2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:41:06 np0005592157 systemd[1]: libpod-conmon-2a130c7af638c4870dc5c1da9f9f026fe49ca016b15dc8c6a053ad4ef84333a1.scope: Deactivated successfully.
Jan 22 08:41:06 np0005592157 podman[113739]: 2026-01-22 13:41:06.463519956 +0000 UTC m=+0.048938697 container create 195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:41:06 np0005592157 systemd[1]: Started libpod-conmon-195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663.scope.
Jan 22 08:41:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:41:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056acb8b72ac10d18768efc24c8af75afa68849cf2ad0948d248217718b5e1c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056acb8b72ac10d18768efc24c8af75afa68849cf2ad0948d248217718b5e1c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056acb8b72ac10d18768efc24c8af75afa68849cf2ad0948d248217718b5e1c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056acb8b72ac10d18768efc24c8af75afa68849cf2ad0948d248217718b5e1c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:06 np0005592157 podman[113739]: 2026-01-22 13:41:06.524414859 +0000 UTC m=+0.109833690 container init 195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:41:06 np0005592157 podman[113739]: 2026-01-22 13:41:06.440969276 +0000 UTC m=+0.026388067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:41:06 np0005592157 podman[113739]: 2026-01-22 13:41:06.537871664 +0000 UTC m=+0.123290415 container start 195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:41:06 np0005592157 podman[113739]: 2026-01-22 13:41:06.551165234 +0000 UTC m=+0.136584025 container attach 195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 08:41:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]: {
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:    "0": [
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:        {
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "devices": [
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "/dev/loop3"
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            ],
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "lv_name": "ceph_lv0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "lv_size": "7511998464",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "name": "ceph_lv0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "tags": {
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.cluster_name": "ceph",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.crush_device_class": "",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.encrypted": "0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.osd_id": "0",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.type": "block",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:                "ceph.vdo": "0"
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            },
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "type": "block",
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:            "vg_name": "ceph_vg0"
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:        }
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]:    ]
Jan 22 08:41:07 np0005592157 jolly_zhukovsky[113779]: }
Jan 22 08:41:07 np0005592157 systemd[1]: libpod-195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663.scope: Deactivated successfully.
Jan 22 08:41:07 np0005592157 podman[113739]: 2026-01-22 13:41:07.352684924 +0000 UTC m=+0.938103695 container died 195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Jan 22 08:41:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-056acb8b72ac10d18768efc24c8af75afa68849cf2ad0948d248217718b5e1c6-merged.mount: Deactivated successfully.
Jan 22 08:41:07 np0005592157 podman[113739]: 2026-01-22 13:41:07.42375013 +0000 UTC m=+1.009168871 container remove 195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:41:07 np0005592157 systemd[1]: libpod-conmon-195065b2da6e5ad39f4c18e17ef7c6481faf6158f23f5fdc6157a04ae0b2b663.scope: Deactivated successfully.
Jan 22 08:41:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:07.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:07.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:07 np0005592157 python3.9[114017]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 08:41:08 np0005592157 podman[114095]: 2026-01-22 13:41:08.124211628 +0000 UTC m=+0.048682541 container create d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:41:08 np0005592157 systemd[1]: Started libpod-conmon-d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2.scope.
Jan 22 08:41:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:41:08 np0005592157 podman[114095]: 2026-01-22 13:41:08.101845883 +0000 UTC m=+0.026316806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:41:08 np0005592157 podman[114095]: 2026-01-22 13:41:08.206689608 +0000 UTC m=+0.131160541 container init d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:41:08 np0005592157 podman[114095]: 2026-01-22 13:41:08.212867802 +0000 UTC m=+0.137338715 container start d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tharp, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:41:08 np0005592157 podman[114095]: 2026-01-22 13:41:08.216174704 +0000 UTC m=+0.140645637 container attach d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:41:08 np0005592157 nice_tharp[114138]: 167 167
Jan 22 08:41:08 np0005592157 systemd[1]: libpod-d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2.scope: Deactivated successfully.
Jan 22 08:41:08 np0005592157 podman[114095]: 2026-01-22 13:41:08.218578664 +0000 UTC m=+0.143049567 container died d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tharp, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:41:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2e2d9ca7fc7a9eeb7712f8107a38e6b9076267d79e0fee817b650b348c7e5acb-merged.mount: Deactivated successfully.
Jan 22 08:41:08 np0005592157 podman[114095]: 2026-01-22 13:41:08.263943551 +0000 UTC m=+0.188414444 container remove d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 22 08:41:08 np0005592157 systemd[1]: libpod-conmon-d118fb5ddfbd939d2e20f0b17dcb0c376494b96c028340e31649b3686b38b0a2.scope: Deactivated successfully.
Jan 22 08:41:08 np0005592157 podman[114186]: 2026-01-22 13:41:08.468168777 +0000 UTC m=+0.056922246 container create f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:41:08 np0005592157 systemd[1]: Started libpod-conmon-f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208.scope.
Jan 22 08:41:08 np0005592157 podman[114186]: 2026-01-22 13:41:08.449074772 +0000 UTC m=+0.037828301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:41:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:41:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cbe726d4e1cce5a63d260e41cce7a35522152948250532e04c6f114791b86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cbe726d4e1cce5a63d260e41cce7a35522152948250532e04c6f114791b86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cbe726d4e1cce5a63d260e41cce7a35522152948250532e04c6f114791b86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cbe726d4e1cce5a63d260e41cce7a35522152948250532e04c6f114791b86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:41:08 np0005592157 podman[114186]: 2026-01-22 13:41:08.552502072 +0000 UTC m=+0.141255581 container init f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:41:08 np0005592157 podman[114186]: 2026-01-22 13:41:08.567767282 +0000 UTC m=+0.156520801 container start f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:41:08 np0005592157 podman[114186]: 2026-01-22 13:41:08.572010277 +0000 UTC m=+0.160763806 container attach f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:41:08 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:09 np0005592157 happy_bohr[114202]: {
Jan 22 08:41:09 np0005592157 happy_bohr[114202]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:41:09 np0005592157 happy_bohr[114202]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:41:09 np0005592157 happy_bohr[114202]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:41:09 np0005592157 happy_bohr[114202]:        "osd_id": 0,
Jan 22 08:41:09 np0005592157 happy_bohr[114202]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:41:09 np0005592157 happy_bohr[114202]:        "type": "bluestore"
Jan 22 08:41:09 np0005592157 happy_bohr[114202]:    }
Jan 22 08:41:09 np0005592157 happy_bohr[114202]: }
Jan 22 08:41:09 np0005592157 systemd[1]: libpod-f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208.scope: Deactivated successfully.
Jan 22 08:41:09 np0005592157 podman[114186]: 2026-01-22 13:41:09.531545653 +0000 UTC m=+1.120299142 container died f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:41:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-878cbe726d4e1cce5a63d260e41cce7a35522152948250532e04c6f114791b86-merged.mount: Deactivated successfully.
Jan 22 08:41:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:09 np0005592157 podman[114186]: 2026-01-22 13:41:09.608366942 +0000 UTC m=+1.197120431 container remove f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:41:09 np0005592157 systemd[1]: libpod-conmon-f9268523a3cbc9cd65f3006066a153338f8746b0ce2dee6a6aa59ebae35b6208.scope: Deactivated successfully.
Jan 22 08:41:09 np0005592157 python3.9[114343]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:41:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:41:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev eb437ea0-f660-4b1e-95f0-857989dd9ca8 does not exist
Jan 22 08:41:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6b488cdc-61bf-4b45-94ad-af653d9ba6bf does not exist
Jan 22 08:41:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4d09a7e8-257f-4ab6-bc90-d8b3cc326679 does not exist
Jan 22 08:41:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:09.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:09.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:10 np0005592157 python3.9[114491]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:10 np0005592157 python3.9[114643]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:11 np0005592157 python3.9[114722]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:11.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:11.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:13 np0005592157 python3.9[114874]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:13.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:41:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:13.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:41:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:15 np0005592157 python3.9[115027]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:41:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:15.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:15.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:16 np0005592157 python3.9[115112]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:41:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:17 np0005592157 systemd-logind[785]: Session 38 logged out. Waiting for processes to exit.
Jan 22 08:41:17 np0005592157 systemd[1]: session-38.scope: Deactivated successfully.
Jan 22 08:41:17 np0005592157 systemd[1]: session-38.scope: Consumed 27.173s CPU time.
Jan 22 08:41:17 np0005592157 systemd-logind[785]: Removed session 38.
Jan 22 08:41:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:17.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:17.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:18 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:19.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:19.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:21.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:21.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:22 np0005592157 systemd-logind[785]: New session 39 of user zuul.
Jan 22 08:41:22 np0005592157 systemd[1]: Started Session 39 of User zuul.
Jan 22 08:41:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:23 np0005592157 python3.9[115298]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:23.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:24 np0005592157 python3.9[115450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:25 np0005592157 python3.9[115528]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:25.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:25 np0005592157 systemd[1]: session-39.scope: Deactivated successfully.
Jan 22 08:41:25 np0005592157 systemd[1]: session-39.scope: Consumed 1.856s CPU time.
Jan 22 08:41:25 np0005592157 systemd-logind[785]: Session 39 logged out. Waiting for processes to exit.
Jan 22 08:41:25 np0005592157 systemd-logind[785]: Removed session 39.
Jan 22 08:41:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:25.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:27.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:27.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:29 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:29.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:30.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:31 np0005592157 systemd-logind[785]: New session 40 of user zuul.
Jan 22 08:41:31 np0005592157 systemd[1]: Started Session 40 of User zuul.
Jan 22 08:41:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:31.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:41:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:32.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:41:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:32 np0005592157 python3.9[115760]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:41:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:33.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:33 np0005592157 python3.9[115917]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:41:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:34.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:41:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:34 np0005592157 python3.9[116092]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:35 np0005592157 python3.9[116170]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.adlq0mdk recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:35.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:36.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:36 np0005592157 python3.9[116323]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:37 np0005592157 python3.9[116401]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.qrrktfqp recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:37.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:37 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:38 np0005592157 python3.9[116554]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:38 np0005592157 python3.9[116706]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:39 np0005592157 python3.9[116784]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:39.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:40.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:40 np0005592157 python3.9[116937]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:40 np0005592157 python3.9[117015]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:41 np0005592157 python3.9[117168]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:41:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:42.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:41:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:42 np0005592157 python3.9[117320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:43 np0005592157 python3.9[117398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:43 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:41:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:41:43 np0005592157 python3.9[117551]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:44.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:44 np0005592157 python3.9[117629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:45.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:45 np0005592157 python3.9[117782]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:45 np0005592157 systemd[1]: Reloading.
Jan 22 08:41:46 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:41:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:41:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:46.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:41:46 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:41:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:46 np0005592157 python3.9[117972]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:41:47
Jan 22 08:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'volumes', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root']
Jan 22 08:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:41:47 np0005592157 python3.9[118050]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:47.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:48 np0005592157 python3.9[118203]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:48 np0005592157 python3.9[118331]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:49 np0005592157 python3.9[118484]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:49 np0005592157 systemd[1]: Reloading.
Jan 22 08:41:49 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:49.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:49 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:41:49 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:41:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:50 np0005592157 systemd[1]: Starting Create netns directory...
Jan 22 08:41:50 np0005592157 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:41:50 np0005592157 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:41:50 np0005592157 systemd[1]: Finished Create netns directory.
Jan 22 08:41:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:51 np0005592157 python3.9[118675]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:41:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:51.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:52.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:52 np0005592157 network[118693]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:41:52 np0005592157 network[118694]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:41:52 np0005592157 network[118695]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:41:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 08:41:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 08:41:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:55.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:56.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:41:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2153 writes, 9820 keys, 2153 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2153 writes, 2153 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2153 writes, 9820 keys, 2153 commit groups, 1.0 writes per commit group, ingest: 12.84 MB, 0.02 MB/s#012Interval WAL: 2153 writes, 2153 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.5      0.09              0.03         2    0.047       0      0       0.0       0.0#012  L6      1/0    8.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0    129.7    129.0      0.07              0.03         1    0.065    3870    292       0.0       0.0#012 Sum      1/0    8.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     53.0    105.6      0.16              0.06         3    0.053    3870    292       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     54.4    108.2      0.15              0.06         2    0.077    3870    292       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    129.7    129.0      0.07              0.03         1    0.065    3870    292       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     93.1      0.09              0.03         1    0.090       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 403.62 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000135 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(25,343.62 KB,0.110385%) FilterBlock(4,19.48 KB,0.00625912%) IndexBlock(4,40.52 KB,0.0130151%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:41:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:57.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:41:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:58.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:41:58 np0005592157 python3.9[118960]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:58 np0005592157 python3.9[119038]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:41:59 np0005592157 python3.9[119191]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:41:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:59.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:00.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:00 np0005592157 python3.9[119343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:01 np0005592157 python3.9[119421]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:42:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:01.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:42:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:42:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:02.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:42:02 np0005592157 systemd[1]: session-18.scope: Deactivated successfully.
Jan 22 08:42:02 np0005592157 systemd[1]: session-18.scope: Consumed 1min 44.695s CPU time.
Jan 22 08:42:02 np0005592157 systemd-logind[785]: Session 18 logged out. Waiting for processes to exit.
Jan 22 08:42:02 np0005592157 systemd-logind[785]: Removed session 18.
Jan 22 08:42:02 np0005592157 python3.9[119574]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 08:42:02 np0005592157 systemd[1]: Starting Time & Date Service...
Jan 22 08:42:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:02 np0005592157 systemd[1]: Started Time & Date Service.
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:42:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:03 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:03.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:03 np0005592157 python3.9[119731]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:04.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:04 np0005592157 python3.9[119883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:05 np0005592157 python3.9[119961]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:05.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:42:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:06.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:42:06 np0005592157 python3.9[120114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:06 np0005592157 python3.9[120192]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0yhy7c5g recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:07 np0005592157 python3.9[120345]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:07.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:42:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:08.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:42:08 np0005592157 python3.9[120423]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:08 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:09 np0005592157 python3.9[120625]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:09.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:10.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:10 np0005592157 python3[120779]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:42:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:11 np0005592157 python3.9[121061]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:42:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:42:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:11 np0005592157 python3.9[121140]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:12.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:42:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:42:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:12 np0005592157 python3.9[121292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b13503eb-f07c-4399-95aa-88ea24635d9e does not exist
Jan 22 08:42:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8bf7ee80-b197-4fed-b6e7-3c11ecb269a0 does not exist
Jan 22 08:42:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 34b7ddb1-db67-42be-a781-27d7578a9016 does not exist
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:42:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:42:13 np0005592157 python3.9[121466]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089331.9610116-900-263333980609211/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:13 np0005592157 podman[121584]: 2026-01-22 13:42:13.803085404 +0000 UTC m=+0.062101410 container create 46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:42:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:13.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:13 np0005592157 systemd[1]: Started libpod-conmon-46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03.scope.
Jan 22 08:42:13 np0005592157 podman[121584]: 2026-01-22 13:42:13.77187862 +0000 UTC m=+0.030894686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:42:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:42:13 np0005592157 podman[121584]: 2026-01-22 13:42:13.911354921 +0000 UTC m=+0.170370977 container init 46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:42:13 np0005592157 podman[121584]: 2026-01-22 13:42:13.924104913 +0000 UTC m=+0.183120929 container start 46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:42:13 np0005592157 podman[121584]: 2026-01-22 13:42:13.930442078 +0000 UTC m=+0.189458154 container attach 46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:42:13 np0005592157 focused_volhard[121608]: 167 167
Jan 22 08:42:13 np0005592157 systemd[1]: libpod-46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03.scope: Deactivated successfully.
Jan 22 08:42:13 np0005592157 conmon[121608]: conmon 46c52cc99a804cfcea41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03.scope/container/memory.events
Jan 22 08:42:13 np0005592157 podman[121584]: 2026-01-22 13:42:13.93623195 +0000 UTC m=+0.195247956 container died 46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:42:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-94a9d1db423d8a396cab09539c50d8e370eac82a8b7314f54b991a8e0cc67a34-merged.mount: Deactivated successfully.
Jan 22 08:42:13 np0005592157 podman[121584]: 2026-01-22 13:42:13.99760891 +0000 UTC m=+0.256624916 container remove 46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:42:14 np0005592157 systemd[1]: libpod-conmon-46c52cc99a804cfcea417749ff54e1f6af06cfa8ba0c664b16feaf1420000b03.scope: Deactivated successfully.
Jan 22 08:42:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:14.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:42:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:42:14 np0005592157 podman[121699]: 2026-01-22 13:42:14.208908238 +0000 UTC m=+0.053654654 container create 8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bell, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:42:14 np0005592157 systemd[1]: Started libpod-conmon-8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559.scope.
Jan 22 08:42:14 np0005592157 podman[121699]: 2026-01-22 13:42:14.186679224 +0000 UTC m=+0.031425660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:42:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:42:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a99640a696e58a0e0dc823f538e9fa940a7b2a2360e0dc3d83dd343e592f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a99640a696e58a0e0dc823f538e9fa940a7b2a2360e0dc3d83dd343e592f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a99640a696e58a0e0dc823f538e9fa940a7b2a2360e0dc3d83dd343e592f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a99640a696e58a0e0dc823f538e9fa940a7b2a2360e0dc3d83dd343e592f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a99640a696e58a0e0dc823f538e9fa940a7b2a2360e0dc3d83dd343e592f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:14 np0005592157 podman[121699]: 2026-01-22 13:42:14.309512808 +0000 UTC m=+0.154259244 container init 8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:42:14 np0005592157 podman[121699]: 2026-01-22 13:42:14.321694896 +0000 UTC m=+0.166441292 container start 8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:42:14 np0005592157 podman[121699]: 2026-01-22 13:42:14.32720644 +0000 UTC m=+0.171952836 container attach 8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:42:14 np0005592157 python3.9[121773]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:15 np0005592157 blissful_bell[121740]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:42:15 np0005592157 blissful_bell[121740]: --> relative data size: 1.0
Jan 22 08:42:15 np0005592157 blissful_bell[121740]: --> All data devices are unavailable
Jan 22 08:42:15 np0005592157 systemd[1]: libpod-8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559.scope: Deactivated successfully.
Jan 22 08:42:15 np0005592157 podman[121699]: 2026-01-22 13:42:15.193403182 +0000 UTC m=+1.038149608 container died 8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bell, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:42:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f330a99640a696e58a0e0dc823f538e9fa940a7b2a2360e0dc3d83dd343e592f-merged.mount: Deactivated successfully.
Jan 22 08:42:15 np0005592157 python3.9[121857]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:15 np0005592157 podman[121699]: 2026-01-22 13:42:15.288911928 +0000 UTC m=+1.133658324 container remove 8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:42:15 np0005592157 systemd[1]: libpod-conmon-8d64fa16a55b3a47cbf766453ad012b480c5019b5ab9ede1328035941d318559.scope: Deactivated successfully.
Jan 22 08:42:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:15.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:16 np0005592157 podman[122160]: 2026-01-22 13:42:16.017029073 +0000 UTC m=+0.057156819 container create 65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_williams, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:42:16 np0005592157 systemd[1]: Started libpod-conmon-65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26.scope.
Jan 22 08:42:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:16.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:16 np0005592157 podman[122160]: 2026-01-22 13:42:15.990787621 +0000 UTC m=+0.030915417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:42:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:42:16 np0005592157 podman[122160]: 2026-01-22 13:42:16.106200024 +0000 UTC m=+0.146327810 container init 65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_williams, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:42:16 np0005592157 podman[122160]: 2026-01-22 13:42:16.118230988 +0000 UTC m=+0.158358764 container start 65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:42:16 np0005592157 podman[122160]: 2026-01-22 13:42:16.122100603 +0000 UTC m=+0.162228389 container attach 65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:42:16 np0005592157 quirky_williams[122183]: 167 167
Jan 22 08:42:16 np0005592157 systemd[1]: libpod-65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26.scope: Deactivated successfully.
Jan 22 08:42:16 np0005592157 podman[122188]: 2026-01-22 13:42:16.177581779 +0000 UTC m=+0.032038814 container died 65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:42:16 np0005592157 python3.9[122178]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d64df166f8364a2f010cc93d7687468db36156fccffdada4ca841d87a0e50424-merged.mount: Deactivated successfully.
Jan 22 08:42:16 np0005592157 podman[122188]: 2026-01-22 13:42:16.214171474 +0000 UTC m=+0.068628489 container remove 65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:42:16 np0005592157 systemd[1]: libpod-conmon-65eeb4e08e8a1e25ccf2204ca13655d2a580bf5f119806d8fca798aefa5e9f26.scope: Deactivated successfully.
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.386841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336387094, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2350, "num_deletes": 251, "total_data_size": 3509229, "memory_usage": 3568576, "flush_reason": "Manual Compaction"}
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336421049, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3384523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8119, "largest_seqno": 10468, "table_properties": {"data_size": 3374668, "index_size": 5773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25170, "raw_average_key_size": 21, "raw_value_size": 3352581, "raw_average_value_size": 2826, "num_data_blocks": 255, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089177, "oldest_key_time": 1769089177, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 34256 microseconds, and 16353 cpu microseconds.
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421215) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3384523 bytes OK
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421286) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.423620) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.423683) EVENT_LOG_v1 {"time_micros": 1769089336423673, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.423709) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3499068, prev total WAL file size 3499068, number of live WAL files 2.
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.425024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3305KB)], [20(8588KB)]
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336425242, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12179636, "oldest_snapshot_seqno": -1}
Jan 22 08:42:16 np0005592157 podman[122233]: 2026-01-22 13:42:16.448577356 +0000 UTC m=+0.055708763 container create 6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 4241 keys, 10523606 bytes, temperature: kUnknown
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336505205, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 10523606, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10489290, "index_size": 22622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103658, "raw_average_key_size": 24, "raw_value_size": 10406602, "raw_average_value_size": 2453, "num_data_blocks": 980, "num_entries": 4241, "num_filter_entries": 4241, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505686) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10523606 bytes
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.506999) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.8 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 4764, records dropped: 523 output_compression: NoCompression
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.507023) EVENT_LOG_v1 {"time_micros": 1769089336507012, "job": 6, "event": "compaction_finished", "compaction_time_micros": 80215, "compaction_time_cpu_micros": 37767, "output_level": 6, "num_output_files": 1, "total_output_size": 10523606, "num_input_records": 4764, "num_output_records": 4241, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336507865, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336509751, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.424759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.509835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.509842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.509844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.509846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:16.509848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592157 podman[122233]: 2026-01-22 13:42:16.423241047 +0000 UTC m=+0.030372524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:42:16 np0005592157 systemd[1]: Started libpod-conmon-6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25.scope.
Jan 22 08:42:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:42:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e187dfdd489bb8bbf8ff66ad8178c2d79889e761b8388e022017f1b9a6347c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e187dfdd489bb8bbf8ff66ad8178c2d79889e761b8388e022017f1b9a6347c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e187dfdd489bb8bbf8ff66ad8178c2d79889e761b8388e022017f1b9a6347c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e187dfdd489bb8bbf8ff66ad8178c2d79889e761b8388e022017f1b9a6347c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:16 np0005592157 podman[122233]: 2026-01-22 13:42:16.598774659 +0000 UTC m=+0.205906066 container init 6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_meitner, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 08:42:16 np0005592157 podman[122233]: 2026-01-22 13:42:16.608454546 +0000 UTC m=+0.215585933 container start 6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_meitner, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 08:42:16 np0005592157 podman[122233]: 2026-01-22 13:42:16.612345231 +0000 UTC m=+0.219476638 container attach 6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 08:42:16 np0005592157 python3.9[122308]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]: {
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:    "0": [
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:        {
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "devices": [
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "/dev/loop3"
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            ],
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "lv_name": "ceph_lv0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "lv_size": "7511998464",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "name": "ceph_lv0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "tags": {
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.cluster_name": "ceph",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.crush_device_class": "",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.encrypted": "0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.osd_id": "0",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.type": "block",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:                "ceph.vdo": "0"
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            },
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "type": "block",
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:            "vg_name": "ceph_vg0"
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:        }
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]:    ]
Jan 22 08:42:17 np0005592157 gracious_meitner[122275]: }
Jan 22 08:42:17 np0005592157 podman[122233]: 2026-01-22 13:42:17.380364181 +0000 UTC m=+0.987495568 container died 6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_meitner, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:42:17 np0005592157 systemd[1]: libpod-6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25.scope: Deactivated successfully.
Jan 22 08:42:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f7e187dfdd489bb8bbf8ff66ad8178c2d79889e761b8388e022017f1b9a6347c-merged.mount: Deactivated successfully.
Jan 22 08:42:17 np0005592157 podman[122233]: 2026-01-22 13:42:17.459013684 +0000 UTC m=+1.066145061 container remove 6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 08:42:17 np0005592157 systemd[1]: libpod-conmon-6f2db16daf808e0faf07605bbfbd5487bf3456d813b3bb62fdb231ad292f2d25.scope: Deactivated successfully.
Jan 22 08:42:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:17 np0005592157 python3.9[122506]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 08:42:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:17.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 08:42:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:18.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:18 np0005592157 podman[122694]: 2026-01-22 13:42:18.167497319 +0000 UTC m=+0.045624656 container create 2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 08:42:18 np0005592157 systemd[1]: Started libpod-conmon-2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7.scope.
Jan 22 08:42:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:42:18 np0005592157 podman[122694]: 2026-01-22 13:42:18.148994407 +0000 UTC m=+0.027121774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:42:18 np0005592157 podman[122694]: 2026-01-22 13:42:18.250019207 +0000 UTC m=+0.128146574 container init 2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:42:18 np0005592157 podman[122694]: 2026-01-22 13:42:18.258650388 +0000 UTC m=+0.136777735 container start 2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:42:18 np0005592157 podman[122694]: 2026-01-22 13:42:18.26280913 +0000 UTC m=+0.140936467 container attach 2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:42:18 np0005592157 xenodochial_kirch[122714]: 167 167
Jan 22 08:42:18 np0005592157 podman[122694]: 2026-01-22 13:42:18.266710056 +0000 UTC m=+0.144837393 container died 2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:42:18 np0005592157 systemd[1]: libpod-2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7.scope: Deactivated successfully.
Jan 22 08:42:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8d927c813edcc1d46b7124799381e58f0948ae1fc214ffa5720c717acaeed21a-merged.mount: Deactivated successfully.
Jan 22 08:42:18 np0005592157 podman[122694]: 2026-01-22 13:42:18.306712474 +0000 UTC m=+0.184839801 container remove 2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:42:18 np0005592157 systemd[1]: libpod-conmon-2b7e8e7fd52e8e4cf142237424007274bcf64584267e68c0e274ddb0971bd9f7.scope: Deactivated successfully.
Jan 22 08:42:18 np0005592157 python3.9[122705]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:18 np0005592157 podman[122742]: 2026-01-22 13:42:18.472869117 +0000 UTC m=+0.040880411 container create 0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:42:18 np0005592157 systemd[1]: Started libpod-conmon-0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154.scope.
Jan 22 08:42:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:42:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ddee9c0d6218fb3777fee218c59ea04a468c568c60660912d89241160e44b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ddee9c0d6218fb3777fee218c59ea04a468c568c60660912d89241160e44b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ddee9c0d6218fb3777fee218c59ea04a468c568c60660912d89241160e44b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ddee9c0d6218fb3777fee218c59ea04a468c568c60660912d89241160e44b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:42:18 np0005592157 podman[122742]: 2026-01-22 13:42:18.456182849 +0000 UTC m=+0.024194163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:42:18 np0005592157 podman[122742]: 2026-01-22 13:42:18.562854977 +0000 UTC m=+0.130866311 container init 0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 22 08:42:18 np0005592157 podman[122742]: 2026-01-22 13:42:18.574025671 +0000 UTC m=+0.142037015 container start 0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:42:18 np0005592157 podman[122742]: 2026-01-22 13:42:18.577645379 +0000 UTC m=+0.145656683 container attach 0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jepsen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Jan 22 08:42:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:18 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:19 np0005592157 python3.9[122908]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]: {
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]:        "osd_id": 0,
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]:        "type": "bluestore"
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]:    }
Jan 22 08:42:19 np0005592157 dreamy_jepsen[122776]: }
Jan 22 08:42:19 np0005592157 systemd[1]: libpod-0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154.scope: Deactivated successfully.
Jan 22 08:42:19 np0005592157 podman[122742]: 2026-01-22 13:42:19.539023129 +0000 UTC m=+1.107034443 container died 0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:42:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fd5ddee9c0d6218fb3777fee218c59ea04a468c568c60660912d89241160e44b-merged.mount: Deactivated successfully.
Jan 22 08:42:19 np0005592157 podman[122742]: 2026-01-22 13:42:19.614112315 +0000 UTC m=+1.182123609 container remove 0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jepsen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 22 08:42:19 np0005592157 systemd[1]: libpod-conmon-0f123983d8b3fc24ecdf169f7b58f41706e859bb1283744b816320e3ca5e8154.scope: Deactivated successfully.
Jan 22 08:42:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:42:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:42:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:19.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:20.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:20 np0005592157 python3.9[123091]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3cb0276a-2272-4b9a-8cd5-3482221ff97f does not exist
Jan 22 08:42:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 62e36c5e-65b7-418e-bae8-93d7f8928201 does not exist
Jan 22 08:42:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ebf5d634-789d-4386-8b80-22701df8b6cd does not exist
Jan 22 08:42:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:21 np0005592157 python3.9[123293]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:21.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:22 np0005592157 python3.9[123446]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.758207) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342759164, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 329, "num_deletes": 250, "total_data_size": 164068, "memory_usage": 170080, "flush_reason": "Manual Compaction"}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342764618, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 162793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10469, "largest_seqno": 10797, "table_properties": {"data_size": 160615, "index_size": 342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5684, "raw_average_key_size": 19, "raw_value_size": 156292, "raw_average_value_size": 533, "num_data_blocks": 14, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089337, "oldest_key_time": 1769089337, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6411 microseconds, and 2960 cpu microseconds.
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.764670) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 162793 bytes OK
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.764694) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.766041) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.766065) EVENT_LOG_v1 {"time_micros": 1769089342766057, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.766096) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 161767, prev total WAL file size 161767, number of live WAL files 2.
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.766750) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(158KB)], [23(10MB)]
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342766827, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10686399, "oldest_snapshot_seqno": -1}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4023 keys, 7891817 bytes, temperature: kUnknown
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342856034, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7891817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7862516, "index_size": 18119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 99705, "raw_average_key_size": 24, "raw_value_size": 7787082, "raw_average_value_size": 1935, "num_data_blocks": 782, "num_entries": 4023, "num_filter_entries": 4023, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.856333) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7891817 bytes
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.857693) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.7 rd, 88.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(114.1) write-amplify(48.5) OK, records in: 4534, records dropped: 511 output_compression: NoCompression
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.857716) EVENT_LOG_v1 {"time_micros": 1769089342857706, "job": 8, "event": "compaction_finished", "compaction_time_micros": 89305, "compaction_time_cpu_micros": 38571, "output_level": 6, "num_output_files": 1, "total_output_size": 7891817, "num_input_records": 4534, "num_output_records": 4023, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342857848, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342859580, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.766618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.859706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.859716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.859722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.859726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:42:22.859730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:23 np0005592157 python3.9[123598]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:42:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:23 np0005592157 python3.9[123751]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:42:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:24.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:24 np0005592157 systemd[1]: session-40.scope: Deactivated successfully.
Jan 22 08:42:24 np0005592157 systemd[1]: session-40.scope: Consumed 34.760s CPU time.
Jan 22 08:42:24 np0005592157 systemd-logind[785]: Session 40 logged out. Waiting for processes to exit.
Jan 22 08:42:24 np0005592157 systemd-logind[785]: Removed session 40.
Jan 22 08:42:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:42:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:27.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:42:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:28 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:29.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:31.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:32 np0005592157 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 08:42:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:33 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:42:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:42:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592157 systemd-logind[785]: New session 41 of user zuul.
Jan 22 08:42:35 np0005592157 systemd[1]: Started Session 41 of User zuul.
Jan 22 08:42:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:35.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:37.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:37 np0005592157 python3.9[123990]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 08:42:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:38.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:38 np0005592157 python3.9[124142]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:42:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:39 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:40 np0005592157 python3.9[124297]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 22 08:42:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:41 np0005592157 python3.9[124450]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.a23olt75 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:41.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:42:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:42.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:42:42 np0005592157 python3.9[124575]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.a23olt75 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089361.1163242-107-68166800952475/.source.a23olt75 _original_basename=.5do4zgk5 follow=False checksum=9893b3bde8503c371031e4467aece9772279f87c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:43 np0005592157 python3.9[124728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:42:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:43.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:44.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592157 python3.9[124880]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=#012 create=True mode=0644 path=/tmp/ansible.a23olt75 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 08:42:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:45.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 08:42:46 np0005592157 python3.9[125033]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.a23olt75' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:42:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:47 np0005592157 python3.9[125187]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.a23olt75 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:42:47
Jan 22 08:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'vms']
Jan 22 08:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:42:47 np0005592157 systemd[1]: session-41.scope: Deactivated successfully.
Jan 22 08:42:47 np0005592157 systemd[1]: session-41.scope: Consumed 6.057s CPU time.
Jan 22 08:42:47 np0005592157 systemd-logind[785]: Session 41 logged out. Waiting for processes to exit.
Jan 22 08:42:47 np0005592157 systemd-logind[785]: Removed session 41.
Jan 22 08:42:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:47 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:48.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:49.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:52.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:52 np0005592157 systemd-logind[785]: New session 42 of user zuul.
Jan 22 08:42:52 np0005592157 systemd[1]: Started Session 42 of User zuul.
Jan 22 08:42:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:53 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:54 np0005592157 python3.9[125419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:42:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:55 np0005592157 python3.9[125576]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:42:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:55.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:56.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:56 np0005592157 python3.9[125730]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:42:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:57.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:58 np0005592157 python3.9[125884]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:58.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:42:59 np0005592157 python3.9[126037]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:42:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:42:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:42:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:42:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:00 np0005592157 python3.9[126190]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:00.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:00 np0005592157 systemd[1]: session-42.scope: Deactivated successfully.
Jan 22 08:43:00 np0005592157 systemd[1]: session-42.scope: Consumed 4.831s CPU time.
Jan 22 08:43:00 np0005592157 systemd-logind[785]: Session 42 logged out. Waiting for processes to exit.
Jan 22 08:43:00 np0005592157 systemd-logind[785]: Removed session 42.
Jan 22 08:43:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:01.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:02.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:02 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:43:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:03.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:04.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:05.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:06 np0005592157 systemd-logind[785]: New session 43 of user zuul.
Jan 22 08:43:06 np0005592157 systemd[1]: Started Session 43 of User zuul.
Jan 22 08:43:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:06.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:07 np0005592157 python3.9[126372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:43:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:08.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592157 python3.9[126529]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:43:09 np0005592157 python3.9[126661]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:43:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:09 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:11 np0005592157 python3.9[126817]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:43:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:11.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:13 np0005592157 python3.9[126968]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:43:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:14 np0005592157 python3.9[127119]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:43:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:14.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:14 np0005592157 python3.9[127269]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:43:15 np0005592157 systemd[1]: session-43.scope: Deactivated successfully.
Jan 22 08:43:15 np0005592157 systemd[1]: session-43.scope: Consumed 6.550s CPU time.
Jan 22 08:43:15 np0005592157 systemd-logind[785]: Session 43 logged out. Waiting for processes to exit.
Jan 22 08:43:15 np0005592157 systemd-logind[785]: Removed session 43.
Jan 22 08:43:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:16.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:43:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:18.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:18 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:19.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:21 np0005592157 systemd-logind[785]: New session 44 of user zuul.
Jan 22 08:43:21 np0005592157 systemd[1]: Started Session 44 of User zuul.
Jan 22 08:43:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:43:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:43:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:21.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:22 np0005592157 python3.9[127668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:43:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:43:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 08:43:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:23.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 25f935f0-eddf-4cb1-b5bb-e1f5b2eb9f65 does not exist
Jan 22 08:43:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 01fc0982-fb84-4c6d-adc8-f90b125843d3 does not exist
Jan 22 08:43:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4fbd15cc-d123-4b52-847b-16ee02a88064 does not exist
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:43:24 np0005592157 python3.9[127861]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:43:24 np0005592157 podman[128150]: 2026-01-22 13:43:24.863197088 +0000 UTC m=+0.058211398 container create 8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 22 08:43:24 np0005592157 systemd[1]: Started libpod-conmon-8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6.scope.
Jan 22 08:43:24 np0005592157 podman[128150]: 2026-01-22 13:43:24.835402632 +0000 UTC m=+0.030416982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:43:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:43:24 np0005592157 python3.9[128149]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:24 np0005592157 podman[128150]: 2026-01-22 13:43:24.991596688 +0000 UTC m=+0.186611008 container init 8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keller, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:43:25 np0005592157 podman[128150]: 2026-01-22 13:43:25.003872242 +0000 UTC m=+0.198886542 container start 8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keller, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 08:43:25 np0005592157 podman[128150]: 2026-01-22 13:43:25.008068685 +0000 UTC m=+0.203083045 container attach 8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:43:25 np0005592157 magical_keller[128167]: 167 167
Jan 22 08:43:25 np0005592157 podman[128150]: 2026-01-22 13:43:25.010373122 +0000 UTC m=+0.205387422 container died 8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keller, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:43:25 np0005592157 systemd[1]: libpod-8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6.scope: Deactivated successfully.
Jan 22 08:43:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-84fe6ff6cc795a4c9ca036347184c09207ae83f8479527c82bf562d030a81528-merged.mount: Deactivated successfully.
Jan 22 08:43:25 np0005592157 podman[128150]: 2026-01-22 13:43:25.067426071 +0000 UTC m=+0.262440351 container remove 8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keller, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 08:43:25 np0005592157 systemd[1]: libpod-conmon-8d7a57a684b1d75051c749597d663031a924461167dbd9e7102b44aca67ea5f6.scope: Deactivated successfully.
Jan 22 08:43:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:25 np0005592157 podman[128237]: 2026-01-22 13:43:25.282288077 +0000 UTC m=+0.050817366 container create 48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 22 08:43:25 np0005592157 systemd[1]: Started libpod-conmon-48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee.scope.
Jan 22 08:43:25 np0005592157 podman[128237]: 2026-01-22 13:43:25.257525666 +0000 UTC m=+0.026054985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:43:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:43:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1265d7d4f57efa32508c780f0efa57820e84709610b1e989aeee88cf5740f466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1265d7d4f57efa32508c780f0efa57820e84709610b1e989aeee88cf5740f466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1265d7d4f57efa32508c780f0efa57820e84709610b1e989aeee88cf5740f466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1265d7d4f57efa32508c780f0efa57820e84709610b1e989aeee88cf5740f466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1265d7d4f57efa32508c780f0efa57820e84709610b1e989aeee88cf5740f466/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:25 np0005592157 podman[128237]: 2026-01-22 13:43:25.385599919 +0000 UTC m=+0.154129298 container init 48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_joliot, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:43:25 np0005592157 podman[128237]: 2026-01-22 13:43:25.400364443 +0000 UTC m=+0.168893722 container start 48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:43:25 np0005592157 podman[128237]: 2026-01-22 13:43:25.407036578 +0000 UTC m=+0.175565897 container attach 48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:43:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:25 np0005592157 python3.9[128363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:26 np0005592157 flamboyant_joliot[128282]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:43:26 np0005592157 flamboyant_joliot[128282]: --> relative data size: 1.0
Jan 22 08:43:26 np0005592157 flamboyant_joliot[128282]: --> All data devices are unavailable
Jan 22 08:43:26 np0005592157 systemd[1]: libpod-48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee.scope: Deactivated successfully.
Jan 22 08:43:26 np0005592157 podman[128237]: 2026-01-22 13:43:26.268690778 +0000 UTC m=+1.037220067 container died 48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_joliot, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:43:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1265d7d4f57efa32508c780f0efa57820e84709610b1e989aeee88cf5740f466-merged.mount: Deactivated successfully.
Jan 22 08:43:26 np0005592157 podman[128237]: 2026-01-22 13:43:26.349569045 +0000 UTC m=+1.118098334 container remove 48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_joliot, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:43:26 np0005592157 systemd[1]: libpod-conmon-48f3753a9d8c62c00179849279b94382941525ff016c78c6eb2dbb00119b3aee.scope: Deactivated successfully.
Jan 22 08:43:26 np0005592157 python3.9[128510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089405.2084444-161-95923022604427/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1dc74faeb402ada1df12a530955009044b2d9cfa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:27 np0005592157 podman[128784]: 2026-01-22 13:43:27.095526638 +0000 UTC m=+0.040202164 container create 4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:43:27 np0005592157 systemd[1]: Started libpod-conmon-4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b.scope.
Jan 22 08:43:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:43:27 np0005592157 podman[128784]: 2026-01-22 13:43:27.077208595 +0000 UTC m=+0.021884111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:43:27 np0005592157 podman[128784]: 2026-01-22 13:43:27.18024151 +0000 UTC m=+0.124917076 container init 4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:43:27 np0005592157 podman[128784]: 2026-01-22 13:43:27.186168146 +0000 UTC m=+0.130843632 container start 4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 08:43:27 np0005592157 podman[128784]: 2026-01-22 13:43:27.189409906 +0000 UTC m=+0.134085492 container attach 4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_matsumoto, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:43:27 np0005592157 systemd[1]: libpod-4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b.scope: Deactivated successfully.
Jan 22 08:43:27 np0005592157 keen_matsumoto[128819]: 167 167
Jan 22 08:43:27 np0005592157 conmon[128819]: conmon 4a394acef8ef15cf6c5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b.scope/container/memory.events
Jan 22 08:43:27 np0005592157 podman[128784]: 2026-01-22 13:43:27.194749918 +0000 UTC m=+0.139425434 container died 4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_matsumoto, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:43:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6033c47d00c432e7bd549db8f4222ff471a511a5c17840623e2c8adcf11c498d-merged.mount: Deactivated successfully.
Jan 22 08:43:27 np0005592157 podman[128784]: 2026-01-22 13:43:27.251750006 +0000 UTC m=+0.196425542 container remove 4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:43:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:27.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:27 np0005592157 systemd[1]: libpod-conmon-4a394acef8ef15cf6c5b1396b5cdbaa658f0220eb75e8ca3b77d4dc3ed13fd4b.scope: Deactivated successfully.
Jan 22 08:43:27 np0005592157 python3.9[128815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:27 np0005592157 podman[128858]: 2026-01-22 13:43:27.499267169 +0000 UTC m=+0.068423211 container create f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:43:27 np0005592157 systemd[1]: Started libpod-conmon-f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc.scope.
Jan 22 08:43:27 np0005592157 podman[128858]: 2026-01-22 13:43:27.469832792 +0000 UTC m=+0.038988864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:43:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:43:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d26b1e58a2e28865f8be38fd05c0ec03862836ec680251db2ae957a18621209/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d26b1e58a2e28865f8be38fd05c0ec03862836ec680251db2ae957a18621209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d26b1e58a2e28865f8be38fd05c0ec03862836ec680251db2ae957a18621209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d26b1e58a2e28865f8be38fd05c0ec03862836ec680251db2ae957a18621209/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:27 np0005592157 podman[128858]: 2026-01-22 13:43:27.612419633 +0000 UTC m=+0.181575705 container init f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:43:27 np0005592157 podman[128858]: 2026-01-22 13:43:27.624702926 +0000 UTC m=+0.193858958 container start f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:43:27 np0005592157 podman[128858]: 2026-01-22 13:43:27.629539676 +0000 UTC m=+0.198695698 container attach f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:43:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:27.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:28 np0005592157 python3.9[128988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089406.7808619-161-64172780844955/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cc1c70588824ebebf3437effcc8b7daf397d0332 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]: {
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:    "0": [
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:        {
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "devices": [
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "/dev/loop3"
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            ],
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "lv_name": "ceph_lv0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "lv_size": "7511998464",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "name": "ceph_lv0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "tags": {
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.cluster_name": "ceph",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.crush_device_class": "",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.encrypted": "0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.osd_id": "0",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.type": "block",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:                "ceph.vdo": "0"
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            },
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "type": "block",
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:            "vg_name": "ceph_vg0"
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:        }
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]:    ]
Jan 22 08:43:28 np0005592157 romantic_lichterman[128909]: }
Jan 22 08:43:28 np0005592157 systemd[1]: libpod-f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc.scope: Deactivated successfully.
Jan 22 08:43:28 np0005592157 podman[128858]: 2026-01-22 13:43:28.45045331 +0000 UTC m=+1.019609342 container died f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:43:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5d26b1e58a2e28865f8be38fd05c0ec03862836ec680251db2ae957a18621209-merged.mount: Deactivated successfully.
Jan 22 08:43:28 np0005592157 podman[128858]: 2026-01-22 13:43:28.533997543 +0000 UTC m=+1.103153555 container remove f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 08:43:28 np0005592157 systemd[1]: libpod-conmon-f0c09168585d3b231265d8e4c54b2602151af9f10f334ad47ae5fdbbe901fcbc.scope: Deactivated successfully.
Jan 22 08:43:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:28 np0005592157 python3.9[129158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:29 np0005592157 podman[129423]: 2026-01-22 13:43:29.26195969 +0000 UTC m=+0.052463927 container create 55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 08:43:29 np0005592157 python3.9[129396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089408.1997888-161-145352523564153/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ef92e92bfeadeb5ce3dc9b85445806663ffc7cfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:29 np0005592157 systemd[1]: Started libpod-conmon-55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733.scope.
Jan 22 08:43:29 np0005592157 podman[129423]: 2026-01-22 13:43:29.233372154 +0000 UTC m=+0.023876421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:43:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:43:29 np0005592157 podman[129423]: 2026-01-22 13:43:29.352479165 +0000 UTC m=+0.142983432 container init 55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 08:43:29 np0005592157 podman[129423]: 2026-01-22 13:43:29.358675468 +0000 UTC m=+0.149179695 container start 55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:43:29 np0005592157 podman[129423]: 2026-01-22 13:43:29.362255957 +0000 UTC m=+0.152760224 container attach 55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:43:29 np0005592157 agitated_wright[129463]: 167 167
Jan 22 08:43:29 np0005592157 systemd[1]: libpod-55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733.scope: Deactivated successfully.
Jan 22 08:43:29 np0005592157 podman[129423]: 2026-01-22 13:43:29.364902112 +0000 UTC m=+0.155406349 container died 55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:43:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7f8e64f342da05b6e2eaf6da57b6d08772f4cc3d62dbea04634e735c5aac42f3-merged.mount: Deactivated successfully.
Jan 22 08:43:29 np0005592157 podman[129423]: 2026-01-22 13:43:29.405258259 +0000 UTC m=+0.195762486 container remove 55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:43:29 np0005592157 systemd[1]: libpod-conmon-55fdfaf9c5d9a61555ed930566d2d1ef4b8e502356d8b9a9b1062f0054697733.scope: Deactivated successfully.
Jan 22 08:43:29 np0005592157 podman[129546]: 2026-01-22 13:43:29.55268859 +0000 UTC m=+0.045995027 container create 788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curran, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 08:43:29 np0005592157 systemd[1]: Started libpod-conmon-788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e.scope.
Jan 22 08:43:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:43:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade8ccdc8430c08a52e22b6841b7b289107273b8a1876f6013677e9c76d49313/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade8ccdc8430c08a52e22b6841b7b289107273b8a1876f6013677e9c76d49313/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade8ccdc8430c08a52e22b6841b7b289107273b8a1876f6013677e9c76d49313/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade8ccdc8430c08a52e22b6841b7b289107273b8a1876f6013677e9c76d49313/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:43:29 np0005592157 podman[129546]: 2026-01-22 13:43:29.529799554 +0000 UTC m=+0.023106051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:43:29 np0005592157 podman[129546]: 2026-01-22 13:43:29.632035039 +0000 UTC m=+0.125341496 container init 788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curran, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:43:29 np0005592157 podman[129546]: 2026-01-22 13:43:29.63974081 +0000 UTC m=+0.133047257 container start 788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:43:29 np0005592157 podman[129546]: 2026-01-22 13:43:29.64300563 +0000 UTC m=+0.136312077 container attach 788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:43:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:29 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:29.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:30 np0005592157 python3.9[129688]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:30 np0005592157 great_curran[129607]: {
Jan 22 08:43:30 np0005592157 great_curran[129607]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:43:30 np0005592157 great_curran[129607]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:43:30 np0005592157 great_curran[129607]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:43:30 np0005592157 great_curran[129607]:        "osd_id": 0,
Jan 22 08:43:30 np0005592157 great_curran[129607]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:43:30 np0005592157 great_curran[129607]:        "type": "bluestore"
Jan 22 08:43:30 np0005592157 great_curran[129607]:    }
Jan 22 08:43:30 np0005592157 great_curran[129607]: }
Jan 22 08:43:30 np0005592157 systemd[1]: libpod-788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e.scope: Deactivated successfully.
Jan 22 08:43:30 np0005592157 podman[129546]: 2026-01-22 13:43:30.505411179 +0000 UTC m=+0.998717616 container died 788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 08:43:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ade8ccdc8430c08a52e22b6841b7b289107273b8a1876f6013677e9c76d49313-merged.mount: Deactivated successfully.
Jan 22 08:43:30 np0005592157 podman[129546]: 2026-01-22 13:43:30.560562651 +0000 UTC m=+1.053869088 container remove 788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curran, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:43:30 np0005592157 systemd[1]: libpod-conmon-788b979bffb82a3263adf701fe42dfff194d60ed52e8e2dceccb93405962816e.scope: Deactivated successfully.
Jan 22 08:43:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:43:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:43:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7c29f0fd-67c2-44be-86c7-2fe1b10f5069 does not exist
Jan 22 08:43:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0ddbd723-86a1-4e49-a6f1-8954090430b3 does not exist
Jan 22 08:43:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 91c3e0bb-be83-4434-b27c-11e8c2d1b968 does not exist
Jan 22 08:43:30 np0005592157 python3.9[129868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:30 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:30 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:31 np0005592157 python3.9[130071]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:32 np0005592157 python3.9[130194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089410.992273-346-154398674470658/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fa336eb49fd85444e842ec5954b3604c186d5e46 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:32 np0005592157 python3.9[130346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:33 np0005592157 python3.9[130470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089412.3685555-346-18408081721281/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:33.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:34 np0005592157 python3.9[130622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:34 np0005592157 python3.9[130745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089413.7414157-346-222438910452604/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=25956a19eaa8a6aafebc35004d69be459ee96518 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:35 np0005592157 python3.9[130897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:35.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:36 np0005592157 python3.9[131050]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:36 np0005592157 python3.9[131202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:37 np0005592157 python3.9[131326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089416.395657-532-203516614521707/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a12a0abe65fe3b0d73a1d7d85e7c7e8fb15a87cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:38 np0005592157 python3.9[131478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:38 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:38 np0005592157 python3.9[131601]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089417.6724126-532-60324514857886/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:39 np0005592157 python3.9[131754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:39.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:40 np0005592157 python3.9[131877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089419.015539-532-66069478972176/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=abeebe337852ab0e24faf00f3be4b231e2044c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:41.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:41 np0005592157 python3.9[132029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:41.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:42 np0005592157 python3.9[132182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:42 np0005592157 python3.9[132305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089421.5606906-746-236347244160548/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:43 np0005592157 python3.9[132458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:43 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:43.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:44 np0005592157 python3.9[132610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:45 np0005592157 python3.9[132733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089423.7795632-829-181529851148068/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:45 np0005592157 python3.9[132886]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:45.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:43:46 np0005592157 python3.9[133038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:43:47
Jan 22 08:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', 'volumes', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 22 08:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:43:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:47 np0005592157 python3.9[133161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089426.063855-910-77910025270436/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:47.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:48 np0005592157 python3.9[133314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:48 np0005592157 python3.9[133466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:49 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:49 np0005592157 python3.9[133613]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089428.3748615-989-152183588863023/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:49.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:50 np0005592157 python3.9[133792]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:51 np0005592157 python3.9[133944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 08:43:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:51.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 08:43:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:51 np0005592157 python3.9[134068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089430.7596555-1070-211407524407196/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:52 np0005592157 python3.9[134220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:43:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:43:53 np0005592157 python3.9[134372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:53.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:54 np0005592157 python3.9[134496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089432.9954824-1103-217074967915664/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:54 np0005592157 systemd-logind[785]: Session 44 logged out. Waiting for processes to exit.
Jan 22 08:43:54 np0005592157 systemd[1]: session-44.scope: Deactivated successfully.
Jan 22 08:43:54 np0005592157 systemd[1]: session-44.scope: Consumed 26.053s CPU time.
Jan 22 08:43:54 np0005592157 systemd-logind[785]: Removed session 44.
Jan 22 08:43:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:55.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:43:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:43:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:43:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:43:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:59.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:00 np0005592157 systemd-logind[785]: New session 45 of user zuul.
Jan 22 08:44:00 np0005592157 systemd[1]: Started Session 45 of User zuul.
Jan 22 08:44:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:01 np0005592157 python3.9[134679]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:01 np0005592157 python3.9[134834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:01.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:02 np0005592157 python3.9[134957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089441.297151-62-127474124919266/.source.conf _original_basename=ceph.conf follow=False checksum=c3a8ec6ec08fd3904e44a403280c0742b2934d96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:02 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:03.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:03 np0005592157 python3.9[135109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:44:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:03.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:04 np0005592157 python3.9[135233]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089442.8451982-62-238880327491793/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8d4a0ad3eb7bcba9ed45036c12ef9de6a4ee9832 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:04 np0005592157 systemd[1]: session-45.scope: Deactivated successfully.
Jan 22 08:44:04 np0005592157 systemd[1]: session-45.scope: Consumed 2.923s CPU time.
Jan 22 08:44:04 np0005592157 systemd-logind[785]: Session 45 logged out. Waiting for processes to exit.
Jan 22 08:44:04 np0005592157 systemd-logind[785]: Removed session 45.
Jan 22 08:44:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:05.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:07.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:07.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:09 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:09 np0005592157 systemd-logind[785]: New session 46 of user zuul.
Jan 22 08:44:09 np0005592157 systemd[1]: Started Session 46 of User zuul.
Jan 22 08:44:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:09.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:10 np0005592157 python3.9[135464]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:11.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:12 np0005592157 python3.9[135621]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:13 np0005592157 python3.9[135773]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:13 np0005592157 python3.9[135924]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:14 np0005592157 python3.9[136076]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 08:44:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:44:17 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 22 08:44:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:17 np0005592157 python3.9[136233]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:44:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:18.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:18 np0005592157 python3.9[136318]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:44:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:19 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:20.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:21 np0005592157 python3.9[136472]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:44:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:21.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:22.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:22 np0005592157 python3[136628]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 22 08:44:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:22 np0005592157 python3.9[136780]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:23.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:23 np0005592157 python3.9[136933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:24.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:24 np0005592157 python3.9[137011]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:25 np0005592157 python3.9[137163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:25.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:25 np0005592157 python3.9[137242]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.r_17c45_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:26.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:26 np0005592157 python3.9[137394]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:26 np0005592157 python3.9[137472]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:27.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:28 np0005592157 python3.9[137625]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:28.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:29 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:29 np0005592157 python3[137778]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:44:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:30.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:30 np0005592157 python3.9[137962]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:30 np0005592157 python3.9[138107]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089469.4510593-431-52245216623946/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:44:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 7883 writes, 32K keys, 7883 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7883 writes, 1478 syncs, 5.33 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7883 writes, 32K keys, 7883 commit groups, 1.0 writes per commit group, ingest: 20.81 MB, 0.03 MB/s#012Interval WAL: 7883 writes, 1478 syncs, 5.33 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000141 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000141 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 22 08:44:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:44:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:44:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:31 np0005592157 python3.9[138360]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:32.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:32 np0005592157 python3.9[138516]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089471.099085-476-251712022348858/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:44:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592157 python3.9[138668]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:44:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:33 np0005592157 python3.9[138794]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089472.5161245-521-277296392715477/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9dd051e9-e006-467f-98d8-da089b4844a3 does not exist
Jan 22 08:44:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a74cfd39-0b9b-4ab8-aa69-2ac5b99b316d does not exist
Jan 22 08:44:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0ec9cb26-3ec1-4a0e-845c-e60eef3717f2 does not exist
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:44:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:34.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:34 np0005592157 python3.9[139046]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:44:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:44:34 np0005592157 podman[139112]: 2026-01-22 13:44:34.649856496 +0000 UTC m=+0.051870219 container create 1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:44:34 np0005592157 systemd[1]: Started libpod-conmon-1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930.scope.
Jan 22 08:44:34 np0005592157 podman[139112]: 2026-01-22 13:44:34.626454739 +0000 UTC m=+0.028468472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:44:34 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:44:34 np0005592157 podman[139112]: 2026-01-22 13:44:34.762147754 +0000 UTC m=+0.164161487 container init 1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cerf, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:44:34 np0005592157 podman[139112]: 2026-01-22 13:44:34.773661588 +0000 UTC m=+0.175675291 container start 1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:44:34 np0005592157 podman[139112]: 2026-01-22 13:44:34.777112873 +0000 UTC m=+0.179126586 container attach 1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 08:44:34 np0005592157 funny_cerf[139165]: 167 167
Jan 22 08:44:34 np0005592157 systemd[1]: libpod-1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930.scope: Deactivated successfully.
Jan 22 08:44:34 np0005592157 podman[139112]: 2026-01-22 13:44:34.783802648 +0000 UTC m=+0.185816361 container died 1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:44:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-41ef7be6a1103b32ae1e038bdafb8a5dda8d3a02c6f5d5b487a058e1e0e70d78-merged.mount: Deactivated successfully.
Jan 22 08:44:34 np0005592157 podman[139112]: 2026-01-22 13:44:34.829804302 +0000 UTC m=+0.231818035 container remove 1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cerf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 08:44:34 np0005592157 systemd[1]: libpod-conmon-1c9b14c2e8857fa864ed1796339c3a0a8b64d9c2f3c7905c15230376f4098930.scope: Deactivated successfully.
Jan 22 08:44:35 np0005592157 podman[139251]: 2026-01-22 13:44:35.014436993 +0000 UTC m=+0.059698963 container create 6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:44:35 np0005592157 systemd[1]: Started libpod-conmon-6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397.scope.
Jan 22 08:44:35 np0005592157 podman[139251]: 2026-01-22 13:44:34.979632025 +0000 UTC m=+0.024894085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:44:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:44:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660ece45bdf534c14b1db91b10cda3dae10b6a5f9402bab0eb9906a89eb7fd98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660ece45bdf534c14b1db91b10cda3dae10b6a5f9402bab0eb9906a89eb7fd98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660ece45bdf534c14b1db91b10cda3dae10b6a5f9402bab0eb9906a89eb7fd98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660ece45bdf534c14b1db91b10cda3dae10b6a5f9402bab0eb9906a89eb7fd98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660ece45bdf534c14b1db91b10cda3dae10b6a5f9402bab0eb9906a89eb7fd98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:35 np0005592157 podman[139251]: 2026-01-22 13:44:35.109302171 +0000 UTC m=+0.154564151 container init 6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 08:44:35 np0005592157 podman[139251]: 2026-01-22 13:44:35.122431264 +0000 UTC m=+0.167693244 container start 6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 08:44:35 np0005592157 python3.9[139250]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089473.927016-566-160824578128352/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:35 np0005592157 podman[139251]: 2026-01-22 13:44:35.131891998 +0000 UTC m=+0.177153968 container attach 6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:44:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:35 np0005592157 python3.9[139425]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:35 np0005592157 tender_carver[139268]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:44:35 np0005592157 tender_carver[139268]: --> relative data size: 1.0
Jan 22 08:44:35 np0005592157 tender_carver[139268]: --> All data devices are unavailable
Jan 22 08:44:35 np0005592157 systemd[1]: libpod-6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397.scope: Deactivated successfully.
Jan 22 08:44:35 np0005592157 podman[139251]: 2026-01-22 13:44:35.963208748 +0000 UTC m=+1.008470718 container died 6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 08:44:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-660ece45bdf534c14b1db91b10cda3dae10b6a5f9402bab0eb9906a89eb7fd98-merged.mount: Deactivated successfully.
Jan 22 08:44:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:36.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:36 np0005592157 podman[139251]: 2026-01-22 13:44:36.036284689 +0000 UTC m=+1.081546659 container remove 6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:44:36 np0005592157 systemd[1]: libpod-conmon-6df5d9c84f655cf9ca9de34498e7b397d08a3efff2c4ada3472718646bd52397.scope: Deactivated successfully.
Jan 22 08:44:36 np0005592157 python3.9[139671]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089475.3069098-611-24819875582240/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:36 np0005592157 podman[139739]: 2026-01-22 13:44:36.661203022 +0000 UTC m=+0.047690907 container create d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 08:44:36 np0005592157 systemd[1]: Started libpod-conmon-d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999.scope.
Jan 22 08:44:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:44:36 np0005592157 podman[139739]: 2026-01-22 13:44:36.63841216 +0000 UTC m=+0.024900105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:44:36 np0005592157 podman[139739]: 2026-01-22 13:44:36.744747821 +0000 UTC m=+0.131235696 container init d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:44:36 np0005592157 podman[139739]: 2026-01-22 13:44:36.751660711 +0000 UTC m=+0.138148606 container start d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 22 08:44:36 np0005592157 cool_wilson[139778]: 167 167
Jan 22 08:44:36 np0005592157 systemd[1]: libpod-d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999.scope: Deactivated successfully.
Jan 22 08:44:36 np0005592157 podman[139739]: 2026-01-22 13:44:36.757512245 +0000 UTC m=+0.144000120 container attach d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:44:36 np0005592157 podman[139739]: 2026-01-22 13:44:36.758151991 +0000 UTC m=+0.144639846 container died d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:44:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4d2fe1279a5e0500ee996ac0fad7f8cd0d48990c4daec7eeef58efeb16085d8c-merged.mount: Deactivated successfully.
Jan 22 08:44:36 np0005592157 podman[139739]: 2026-01-22 13:44:36.809905637 +0000 UTC m=+0.196393492 container remove d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:44:36 np0005592157 systemd[1]: libpod-conmon-d93c609b7cdbe65b5206867845be2ced20a0cf856c99e1f77fe53d76a2f54999.scope: Deactivated successfully.
Jan 22 08:44:36 np0005592157 podman[139869]: 2026-01-22 13:44:36.977056157 +0000 UTC m=+0.055938480 container create 7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:44:37 np0005592157 systemd[1]: Started libpod-conmon-7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30.scope.
Jan 22 08:44:37 np0005592157 podman[139869]: 2026-01-22 13:44:36.95160785 +0000 UTC m=+0.030490153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:44:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:44:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b57ee16cfea6734c0a1eaa07de24749af9e3ce33132859b9c7916ac85facbe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b57ee16cfea6734c0a1eaa07de24749af9e3ce33132859b9c7916ac85facbe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b57ee16cfea6734c0a1eaa07de24749af9e3ce33132859b9c7916ac85facbe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b57ee16cfea6734c0a1eaa07de24749af9e3ce33132859b9c7916ac85facbe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:37 np0005592157 podman[139869]: 2026-01-22 13:44:37.065097627 +0000 UTC m=+0.143979920 container init 7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:44:37 np0005592157 podman[139869]: 2026-01-22 13:44:37.071531516 +0000 UTC m=+0.150413799 container start 7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:44:37 np0005592157 podman[139869]: 2026-01-22 13:44:37.077053262 +0000 UTC m=+0.155935625 container attach 7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:44:37 np0005592157 python3.9[139926]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:37.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]: {
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:    "0": [
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:        {
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "devices": [
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "/dev/loop3"
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            ],
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "lv_name": "ceph_lv0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "lv_size": "7511998464",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "name": "ceph_lv0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "tags": {
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.cluster_name": "ceph",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.crush_device_class": "",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.encrypted": "0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.osd_id": "0",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.type": "block",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:                "ceph.vdo": "0"
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            },
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "type": "block",
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:            "vg_name": "ceph_vg0"
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:        }
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]:    ]
Jan 22 08:44:37 np0005592157 elegant_gagarin[139924]: }
Jan 22 08:44:37 np0005592157 systemd[1]: libpod-7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30.scope: Deactivated successfully.
Jan 22 08:44:37 np0005592157 podman[139869]: 2026-01-22 13:44:37.866627853 +0000 UTC m=+0.945510156 container died 7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:44:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5b57ee16cfea6734c0a1eaa07de24749af9e3ce33132859b9c7916ac85facbe9-merged.mount: Deactivated successfully.
Jan 22 08:44:37 np0005592157 podman[139869]: 2026-01-22 13:44:37.947078676 +0000 UTC m=+1.025960969 container remove 7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 08:44:37 np0005592157 systemd[1]: libpod-conmon-7489587c1b12f206651b2259bb3f6e07395d6d932cf7e7824a2a8d90f48a2a30.scope: Deactivated successfully.
Jan 22 08:44:38 np0005592157 python3.9[140086]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:38.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:38 np0005592157 podman[140340]: 2026-01-22 13:44:38.639169505 +0000 UTC m=+0.047867331 container create 45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:44:38 np0005592157 systemd[1]: Started libpod-conmon-45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df.scope.
Jan 22 08:44:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:44:38 np0005592157 podman[140340]: 2026-01-22 13:44:38.617341587 +0000 UTC m=+0.026039453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:44:38 np0005592157 podman[140340]: 2026-01-22 13:44:38.720017588 +0000 UTC m=+0.128715434 container init 45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:44:38 np0005592157 podman[140340]: 2026-01-22 13:44:38.727727608 +0000 UTC m=+0.136425434 container start 45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:44:38 np0005592157 objective_jepsen[140381]: 167 167
Jan 22 08:44:38 np0005592157 systemd[1]: libpod-45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df.scope: Deactivated successfully.
Jan 22 08:44:38 np0005592157 podman[140340]: 2026-01-22 13:44:38.733134181 +0000 UTC m=+0.141832027 container attach 45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:44:38 np0005592157 podman[140340]: 2026-01-22 13:44:38.733436449 +0000 UTC m=+0.142134285 container died 45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:44:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d216a15749484ed1a3f1be369d8736f81a0c06f7290f8b78dbf85dd4c0fe3a2a-merged.mount: Deactivated successfully.
Jan 22 08:44:38 np0005592157 podman[140340]: 2026-01-22 13:44:38.793786246 +0000 UTC m=+0.202484072 container remove 45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jepsen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:44:38 np0005592157 systemd[1]: libpod-conmon-45d83761b4feb52791048114a48392184e51166bc18c9ec0905c6079a77c53df.scope: Deactivated successfully.
Jan 22 08:44:38 np0005592157 python3.9[140424]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:38 np0005592157 podman[140432]: 2026-01-22 13:44:38.97286353 +0000 UTC m=+0.062562113 container create c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:44:39 np0005592157 systemd[1]: Started libpod-conmon-c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba.scope.
Jan 22 08:44:39 np0005592157 podman[140432]: 2026-01-22 13:44:38.952825456 +0000 UTC m=+0.042524059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:44:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:44:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8f029caaa9cfae2c94c6ebdcf0de7802c8b72ebe7d097c2912604b9965a9be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8f029caaa9cfae2c94c6ebdcf0de7802c8b72ebe7d097c2912604b9965a9be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8f029caaa9cfae2c94c6ebdcf0de7802c8b72ebe7d097c2912604b9965a9be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8f029caaa9cfae2c94c6ebdcf0de7802c8b72ebe7d097c2912604b9965a9be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:44:39 np0005592157 podman[140432]: 2026-01-22 13:44:39.055331773 +0000 UTC m=+0.145030386 container init c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:44:39 np0005592157 podman[140432]: 2026-01-22 13:44:39.064876798 +0000 UTC m=+0.154575381 container start c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:44:39 np0005592157 podman[140432]: 2026-01-22 13:44:39.068841086 +0000 UTC m=+0.158539669 container attach c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:44:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:39 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:39 np0005592157 elated_bohr[140452]: {
Jan 22 08:44:39 np0005592157 elated_bohr[140452]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:44:39 np0005592157 elated_bohr[140452]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:44:39 np0005592157 elated_bohr[140452]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:44:39 np0005592157 elated_bohr[140452]:        "osd_id": 0,
Jan 22 08:44:39 np0005592157 elated_bohr[140452]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:44:39 np0005592157 elated_bohr[140452]:        "type": "bluestore"
Jan 22 08:44:39 np0005592157 elated_bohr[140452]:    }
Jan 22 08:44:39 np0005592157 elated_bohr[140452]: }
Jan 22 08:44:39 np0005592157 systemd[1]: libpod-c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba.scope: Deactivated successfully.
Jan 22 08:44:39 np0005592157 podman[140432]: 2026-01-22 13:44:39.963518688 +0000 UTC m=+1.053217281 container died c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:44:39 np0005592157 python3.9[140610]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3f8f029caaa9cfae2c94c6ebdcf0de7802c8b72ebe7d097c2912604b9965a9be-merged.mount: Deactivated successfully.
Jan 22 08:44:40 np0005592157 podman[140432]: 2026-01-22 13:44:40.017292954 +0000 UTC m=+1.106991537 container remove c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:44:40 np0005592157 systemd[1]: libpod-conmon-c055f6bab34a386aff020def0b286b87f3a25273a4aafc4337f0dc1ba06188ba.scope: Deactivated successfully.
Jan 22 08:44:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:40.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:44:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:44:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 62ea9e60-1af2-4e42-ae7a-038d3a2d08b8 does not exist
Jan 22 08:44:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev db1bed07-83fe-4e3a-a5de-d9d478cb2b10 does not exist
Jan 22 08:44:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 59061e91-762d-4a66-9316-5b1891988769 does not exist
Jan 22 08:44:40 np0005592157 python3.9[140839]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:44:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:41.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:41 np0005592157 python3.9[140993]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:42.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:42 np0005592157 python3.9[141149]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:43.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:43 np0005592157 python3.9[141299]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:44.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 08:44:45 np0005592157 python3.9[141453]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:45 np0005592157 ovs-vsctl[141454]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 22 08:44:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:44:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:44:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:46 np0005592157 python3.9[141607]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:44:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:46 np0005592157 python3.9[141762]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:46 np0005592157 ovs-vsctl[141763]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 22 08:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:44:47
Jan 22 08:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 22 08:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:44:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:47 np0005592157 python3.9[141914]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:44:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:48.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:48 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:48 np0005592157 python3.9[142068]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:49.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:49 np0005592157 python3.9[142221]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:49 np0005592157 python3.9[142299]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:50.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:50 np0005592157 python3.9[142501]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:51 np0005592157 python3.9[142579]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:52 np0005592157 python3.9[142732]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:52 np0005592157 python3.9[142884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:53.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:53 np0005592157 python3.9[142962]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:53 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:54.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:54 np0005592157 python3.9[143115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:54 np0005592157 python3.9[143193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:55.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:55 np0005592157 python3.9[143346]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:44:55 np0005592157 systemd[1]: Reloading.
Jan 22 08:44:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:55 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:44:55 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:44:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:56.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:57.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:57 np0005592157 python3.9[143537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:58.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:58 np0005592157 python3.9[143615]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:59 np0005592157 python3.9[143767]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:44:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:44:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:44:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:59 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:44:59 np0005592157 python3.9[143846]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:00.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:00 np0005592157 python3.9[143998]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:45:00 np0005592157 systemd[1]: Reloading.
Jan 22 08:45:00 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:00 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:01 np0005592157 systemd[1]: Starting Create netns directory...
Jan 22 08:45:01 np0005592157 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:45:01 np0005592157 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:45:01 np0005592157 systemd[1]: Finished Create netns directory.
Jan 22 08:45:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:01.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:02.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:02 np0005592157 python3.9[144192]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:03 np0005592157 python3.9[144344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:45:03 np0005592157 python3.9[144468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089502.4618268-1364-110047276316002/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:04.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:04 np0005592157 python3.9[144620]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:05.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:05 np0005592157 python3.9[144773]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:06.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:06 np0005592157 python3.9[144925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:07 np0005592157 python3.9[145048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089505.9373987-1463-189374062958564/.source.json _original_basename=.a2q7o6si follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:07.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.625376) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507625604, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2131, "num_deletes": 251, "total_data_size": 3179382, "memory_usage": 3252440, "flush_reason": "Manual Compaction"}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507650628, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3097044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10798, "largest_seqno": 12928, "table_properties": {"data_size": 3088227, "index_size": 5055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21978, "raw_average_key_size": 20, "raw_value_size": 3068799, "raw_average_value_size": 2919, "num_data_blocks": 220, "num_entries": 1051, "num_filter_entries": 1051, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089343, "oldest_key_time": 1769089343, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 25258 microseconds, and 14816 cpu microseconds.
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.650747) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3097044 bytes OK
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.650780) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.652784) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.652812) EVENT_LOG_v1 {"time_micros": 1769089507652807, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.652833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3170394, prev total WAL file size 3170394, number of live WAL files 2.
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.654227) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3024KB)], [26(7706KB)]
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507654435, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10988861, "oldest_snapshot_seqno": -1}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4555 keys, 8311460 bytes, temperature: kUnknown
Jan 22 08:45:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507723768, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8311460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8279698, "index_size": 19300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 112507, "raw_average_key_size": 24, "raw_value_size": 8195756, "raw_average_value_size": 1799, "num_data_blocks": 819, "num_entries": 4555, "num_filter_entries": 4555, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.724223) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8311460 bytes
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.725739) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.2 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 5074, records dropped: 519 output_compression: NoCompression
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.725771) EVENT_LOG_v1 {"time_micros": 1769089507725756, "job": 10, "event": "compaction_finished", "compaction_time_micros": 69444, "compaction_time_cpu_micros": 39708, "output_level": 6, "num_output_files": 1, "total_output_size": 8311460, "num_input_records": 5074, "num_output_records": 4555, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507726918, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507729668, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.654001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.729795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.729801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.729803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.729805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:45:07.729806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:07 np0005592157 python3.9[145199]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:08.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:08 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:09.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:45:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:10.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:45:10 np0005592157 python3.9[145673]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 22 08:45:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:11.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:12.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:12 np0005592157 python3.9[145826]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:45:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:12 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:13.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:14.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:14 np0005592157 python3[145979]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:45:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:15.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:16.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:45:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:18.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:19 np0005592157 podman[145992]: 2026-01-22 13:45:19.245459047 +0000 UTC m=+4.988709776 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:19 np0005592157 podman[146114]: 2026-01-22 13:45:19.460622849 +0000 UTC m=+0.078801679 container create 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 08:45:19 np0005592157 podman[146114]: 2026-01-22 13:45:19.421759993 +0000 UTC m=+0.039938883 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:19 np0005592157 python3[145979]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:19 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:20.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:21 np0005592157 python3.9[146305]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:45:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:21.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:22.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:23 np0005592157 python3.9[146460]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:23.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:23 np0005592157 python3.9[146537]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:45:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:24.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:24 np0005592157 python3.9[146688]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089523.7694476-1697-31899633586946/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:25 np0005592157 python3.9[146764]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:45:25 np0005592157 systemd[1]: Reloading.
Jan 22 08:45:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:25 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:25 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:26.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:27 np0005592157 python3.9[146879]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:45:27 np0005592157 systemd[1]: Reloading.
Jan 22 08:45:27 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:27 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:27.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:27 np0005592157 systemd[1]: Starting ovn_controller container...
Jan 22 08:45:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:45:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8b6a8ebd0c6dc0919acf5ed589b776e4909194ed693b23da2df03df4ae2e44/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:27 np0005592157 systemd[1]: Started /usr/bin/podman healthcheck run 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786.
Jan 22 08:45:27 np0005592157 podman[146924]: 2026-01-22 13:45:27.762552143 +0000 UTC m=+0.185720664 container init 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 08:45:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:27 np0005592157 ovn_controller[146940]: + sudo -E kolla_set_configs
Jan 22 08:45:27 np0005592157 podman[146924]: 2026-01-22 13:45:27.806459741 +0000 UTC m=+0.229628222 container start 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:45:27 np0005592157 edpm-start-podman-container[146924]: ovn_controller
Jan 22 08:45:27 np0005592157 systemd[1]: Created slice User Slice of UID 0.
Jan 22 08:45:27 np0005592157 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 22 08:45:27 np0005592157 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 22 08:45:27 np0005592157 systemd[1]: Starting User Manager for UID 0...
Jan 22 08:45:27 np0005592157 edpm-start-podman-container[146923]: Creating additional drop-in dependency for "ovn_controller" (8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786)
Jan 22 08:45:27 np0005592157 systemd[1]: Reloading.
Jan 22 08:45:27 np0005592157 podman[146947]: 2026-01-22 13:45:27.957003627 +0000 UTC m=+0.132492382 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 22 08:45:28 np0005592157 systemd[146970]: Queued start job for default target Main User Target.
Jan 22 08:45:28 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:28 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:28 np0005592157 systemd[146970]: Created slice User Application Slice.
Jan 22 08:45:28 np0005592157 systemd[146970]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 22 08:45:28 np0005592157 systemd[146970]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 08:45:28 np0005592157 systemd[146970]: Reached target Paths.
Jan 22 08:45:28 np0005592157 systemd[146970]: Reached target Timers.
Jan 22 08:45:28 np0005592157 systemd[146970]: Starting D-Bus User Message Bus Socket...
Jan 22 08:45:28 np0005592157 systemd[146970]: Starting Create User's Volatile Files and Directories...
Jan 22 08:45:28 np0005592157 systemd[146970]: Listening on D-Bus User Message Bus Socket.
Jan 22 08:45:28 np0005592157 systemd[146970]: Finished Create User's Volatile Files and Directories.
Jan 22 08:45:28 np0005592157 systemd[146970]: Reached target Sockets.
Jan 22 08:45:28 np0005592157 systemd[146970]: Reached target Basic System.
Jan 22 08:45:28 np0005592157 systemd[146970]: Reached target Main User Target.
Jan 22 08:45:28 np0005592157 systemd[146970]: Startup finished in 159ms.
Jan 22 08:45:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:28.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:28 np0005592157 systemd[1]: Started User Manager for UID 0.
Jan 22 08:45:28 np0005592157 systemd[1]: Started ovn_controller container.
Jan 22 08:45:28 np0005592157 systemd[1]: 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786-69ff8c2c253ffb52.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 08:45:28 np0005592157 systemd[1]: 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786-69ff8c2c253ffb52.service: Failed with result 'exit-code'.
Jan 22 08:45:28 np0005592157 systemd[1]: Started Session c1 of User root.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: INFO:__main__:Validating config file
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: INFO:__main__:Writing out command to execute
Jan 22 08:45:28 np0005592157 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: ++ cat /run_command
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + ARGS=
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + sudo kolla_copy_cacerts
Jan 22 08:45:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:28 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:28 np0005592157 systemd[1]: Started Session c2 of User root.
Jan 22 08:45:28 np0005592157 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + [[ ! -n '' ]]
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + . kolla_extend_start
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + umask 0022
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5101] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5111] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <warn>  [1769089528.5114] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5125] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5133] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5139] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 08:45:28 np0005592157 kernel: br-int: entered promiscuous mode
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5405] manager: (ovn-d9fd1e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5413] manager: (ovn-c803af-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5419] manager: (ovn-c4fa18-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 22 08:45:28 np0005592157 kernel: genev_sys_6081: entered promiscuous mode
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5734] device (genev_sys_6081): carrier: link connected
Jan 22 08:45:28 np0005592157 NetworkManager[48997]: <info>  [1769089528.5737] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Jan 22 08:45:28 np0005592157 systemd-udevd[147077]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:45:28 np0005592157 systemd-udevd[147078]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:45:29 np0005592157 python3.9[147207]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 08:45:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:29.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:30.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:30 np0005592157 python3.9[147410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:31 np0005592157 python3.9[147533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089530.0780864-1832-215194222920424/.source.yaml _original_basename=.l146zw2_ follow=False checksum=46f66c8a157c96fcb7cc69848fe925e114c66b53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:31.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:32 np0005592157 python3.9[147686]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:32 np0005592157 ovs-vsctl[147687]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 22 08:45:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:33 np0005592157 python3.9[147839]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:33 np0005592157 ovs-vsctl[147841]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 22 08:45:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:33 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:34.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:34 np0005592157 python3.9[147995]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:34 np0005592157 ovs-vsctl[147996]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 22 08:45:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:34 np0005592157 systemd[1]: session-46.scope: Deactivated successfully.
Jan 22 08:45:34 np0005592157 systemd[1]: session-46.scope: Consumed 1min 5.076s CPU time.
Jan 22 08:45:34 np0005592157 systemd-logind[785]: Session 46 logged out. Waiting for processes to exit.
Jan 22 08:45:34 np0005592157 systemd-logind[785]: Removed session 46.
Jan 22 08:45:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:35.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:36.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:37.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:38.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:38 np0005592157 systemd[1]: Stopping User Manager for UID 0...
Jan 22 08:45:38 np0005592157 systemd[146970]: Activating special unit Exit the Session...
Jan 22 08:45:38 np0005592157 systemd[146970]: Stopped target Main User Target.
Jan 22 08:45:38 np0005592157 systemd[146970]: Stopped target Basic System.
Jan 22 08:45:38 np0005592157 systemd[146970]: Stopped target Paths.
Jan 22 08:45:38 np0005592157 systemd[146970]: Stopped target Sockets.
Jan 22 08:45:38 np0005592157 systemd[146970]: Stopped target Timers.
Jan 22 08:45:38 np0005592157 systemd[146970]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 08:45:38 np0005592157 systemd[146970]: Closed D-Bus User Message Bus Socket.
Jan 22 08:45:38 np0005592157 systemd[146970]: Stopped Create User's Volatile Files and Directories.
Jan 22 08:45:38 np0005592157 systemd[146970]: Removed slice User Application Slice.
Jan 22 08:45:38 np0005592157 systemd[146970]: Reached target Shutdown.
Jan 22 08:45:38 np0005592157 systemd[146970]: Finished Exit the Session.
Jan 22 08:45:38 np0005592157 systemd[146970]: Reached target Exit the Session.
Jan 22 08:45:38 np0005592157 systemd[1]: user@0.service: Deactivated successfully.
Jan 22 08:45:38 np0005592157 systemd[1]: Stopped User Manager for UID 0.
Jan 22 08:45:38 np0005592157 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 22 08:45:38 np0005592157 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 22 08:45:38 np0005592157 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 22 08:45:38 np0005592157 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 22 08:45:38 np0005592157 systemd[1]: Removed slice User Slice of UID 0.
Jan 22 08:45:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:39 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:39.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:40 np0005592157 systemd-logind[785]: New session 48 of user zuul.
Jan 22 08:45:40 np0005592157 systemd[1]: Started Session 48 of User zuul.
Jan 22 08:45:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:40.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 08:45:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:41 np0005592157 python3.9[148290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 08:45:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:41.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:42.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:42 np0005592157 python3.9[148472]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:43.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:43 np0005592157 python3.9[148625]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 08:45:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:44.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:44 np0005592157 python3.9[148777]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:45:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:45:45 np0005592157 python3.9[148929]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:45.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ae4cffdf-a613-4379-9ed1-11e8215c4819 does not exist
Jan 22 08:45:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c6875472-fc6d-4dfc-a304-29dc30207e35 does not exist
Jan 22 08:45:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f75b2d5f-a32b-4498-bb26-4c964a336888 does not exist
Jan 22 08:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:45:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:46 np0005592157 python3.9[149182]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:46 np0005592157 podman[149223]: 2026-01-22 13:45:46.044606294 +0000 UTC m=+0.052209729 container create 8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 08:45:46 np0005592157 systemd[1]: Started libpod-conmon-8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331.scope.
Jan 22 08:45:46 np0005592157 podman[149223]: 2026-01-22 13:45:46.016460536 +0000 UTC m=+0.024064051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:45:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:45:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:46.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:46 np0005592157 podman[149223]: 2026-01-22 13:45:46.134712014 +0000 UTC m=+0.142315469 container init 8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 08:45:46 np0005592157 podman[149223]: 2026-01-22 13:45:46.143359492 +0000 UTC m=+0.150962927 container start 8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:45:46 np0005592157 podman[149223]: 2026-01-22 13:45:46.146813286 +0000 UTC m=+0.154416751 container attach 8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:45:46 np0005592157 zealous_joliot[149261]: 167 167
Jan 22 08:45:46 np0005592157 systemd[1]: libpod-8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331.scope: Deactivated successfully.
Jan 22 08:45:46 np0005592157 podman[149223]: 2026-01-22 13:45:46.153837395 +0000 UTC m=+0.161440840 container died 8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:45:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4d9fbddfdfddc8b4e1655a1cdd6ffde2c5407ec85647df9326daf6d550cc7e59-merged.mount: Deactivated successfully.
Jan 22 08:45:46 np0005592157 podman[149223]: 2026-01-22 13:45:46.195079658 +0000 UTC m=+0.202683093 container remove 8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:45:46 np0005592157 systemd[1]: libpod-conmon-8269995ad0a14709a40c4fec22921fe3cae0ddf39d790b55094f85f36cde1331.scope: Deactivated successfully.
Jan 22 08:45:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:46 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:45:46 np0005592157 podman[149339]: 2026-01-22 13:45:46.386917519 +0000 UTC m=+0.052266170 container create 858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:45:46 np0005592157 systemd[1]: Started libpod-conmon-858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548.scope.
Jan 22 08:45:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:45:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5015309562bfbb97f0ad2ab0e605c8e5fb9ac3c72165356903d53dc5cc73e17b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5015309562bfbb97f0ad2ab0e605c8e5fb9ac3c72165356903d53dc5cc73e17b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5015309562bfbb97f0ad2ab0e605c8e5fb9ac3c72165356903d53dc5cc73e17b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5015309562bfbb97f0ad2ab0e605c8e5fb9ac3c72165356903d53dc5cc73e17b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5015309562bfbb97f0ad2ab0e605c8e5fb9ac3c72165356903d53dc5cc73e17b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:46 np0005592157 podman[149339]: 2026-01-22 13:45:46.465528232 +0000 UTC m=+0.130876903 container init 858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 08:45:46 np0005592157 podman[149339]: 2026-01-22 13:45:46.371288142 +0000 UTC m=+0.036636793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:45:46 np0005592157 podman[149339]: 2026-01-22 13:45:46.480563604 +0000 UTC m=+0.145912285 container start 858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:45:46 np0005592157 podman[149339]: 2026-01-22 13:45:46.485419721 +0000 UTC m=+0.150768412 container attach 858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:45:46 np0005592157 python3.9[149435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:45:47 np0005592157 elated_williamson[149379]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:45:47 np0005592157 elated_williamson[149379]: --> relative data size: 1.0
Jan 22 08:45:47 np0005592157 elated_williamson[149379]: --> All data devices are unavailable
Jan 22 08:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:45:47
Jan 22 08:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 22 08:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:45:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:45:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592157 systemd[1]: libpod-858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548.scope: Deactivated successfully.
Jan 22 08:45:47 np0005592157 podman[149522]: 2026-01-22 13:45:47.292363607 +0000 UTC m=+0.022759949 container died 858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:45:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5015309562bfbb97f0ad2ab0e605c8e5fb9ac3c72165356903d53dc5cc73e17b-merged.mount: Deactivated successfully.
Jan 22 08:45:47 np0005592157 podman[149522]: 2026-01-22 13:45:47.345063506 +0000 UTC m=+0.075459818 container remove 858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 08:45:47 np0005592157 systemd[1]: libpod-conmon-858e250ee0aed8d3b0271f35dcc537cde5d865409cdc710faddaa4bc9aed1548.scope: Deactivated successfully.
Jan 22 08:45:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:47.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:47 np0005592157 python3.9[149700]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 08:45:48 np0005592157 podman[149752]: 2026-01-22 13:45:48.039796309 +0000 UTC m=+0.049535264 container create dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 08:45:48 np0005592157 systemd[1]: Started libpod-conmon-dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f.scope.
Jan 22 08:45:48 np0005592157 podman[149752]: 2026-01-22 13:45:48.015995766 +0000 UTC m=+0.025734741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:45:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:45:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:48.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:48 np0005592157 podman[149752]: 2026-01-22 13:45:48.138492126 +0000 UTC m=+0.148231101 container init dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 08:45:48 np0005592157 podman[149752]: 2026-01-22 13:45:48.146201281 +0000 UTC m=+0.155940246 container start dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 08:45:48 np0005592157 intelligent_turing[149769]: 167 167
Jan 22 08:45:48 np0005592157 podman[149752]: 2026-01-22 13:45:48.150232308 +0000 UTC m=+0.159971353 container attach dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:45:48 np0005592157 systemd[1]: libpod-dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f.scope: Deactivated successfully.
Jan 22 08:45:48 np0005592157 podman[149752]: 2026-01-22 13:45:48.151336785 +0000 UTC m=+0.161075750 container died dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 08:45:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cbe7c6d845ccc7723db9e7e99fa897501fdb87d86a9c3eb40fc1e943b4d604da-merged.mount: Deactivated successfully.
Jan 22 08:45:48 np0005592157 podman[149752]: 2026-01-22 13:45:48.186351188 +0000 UTC m=+0.196090143 container remove dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 08:45:48 np0005592157 systemd[1]: libpod-conmon-dad26ca7c312259949f937cd6a54ca4902ae84f2f0d6faa8e394be007d88241f.scope: Deactivated successfully.
Jan 22 08:45:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:48 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:48 np0005592157 podman[149792]: 2026-01-22 13:45:48.370727519 +0000 UTC m=+0.048873168 container create 4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:45:48 np0005592157 systemd[1]: Started libpod-conmon-4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1.scope.
Jan 22 08:45:48 np0005592157 podman[149792]: 2026-01-22 13:45:48.348265488 +0000 UTC m=+0.026411137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:45:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:45:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f85d7e0483b5c660d28a572665f84e39e3151c1896e5c3caceb8e3de86bad52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f85d7e0483b5c660d28a572665f84e39e3151c1896e5c3caceb8e3de86bad52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f85d7e0483b5c660d28a572665f84e39e3151c1896e5c3caceb8e3de86bad52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f85d7e0483b5c660d28a572665f84e39e3151c1896e5c3caceb8e3de86bad52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:48 np0005592157 podman[149792]: 2026-01-22 13:45:48.508390985 +0000 UTC m=+0.186536624 container init 4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 08:45:48 np0005592157 podman[149792]: 2026-01-22 13:45:48.514638255 +0000 UTC m=+0.192783874 container start 4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 08:45:48 np0005592157 podman[149792]: 2026-01-22 13:45:48.519613765 +0000 UTC m=+0.197759434 container attach 4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]: {
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:    "0": [
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:        {
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "devices": [
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "/dev/loop3"
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            ],
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "lv_name": "ceph_lv0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "lv_size": "7511998464",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "name": "ceph_lv0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "tags": {
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.cluster_name": "ceph",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.crush_device_class": "",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.encrypted": "0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.osd_id": "0",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.type": "block",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:                "ceph.vdo": "0"
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            },
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "type": "block",
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:            "vg_name": "ceph_vg0"
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:        }
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]:    ]
Jan 22 08:45:49 np0005592157 compassionate_cerf[149808]: }
Jan 22 08:45:49 np0005592157 systemd[1]: libpod-4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1.scope: Deactivated successfully.
Jan 22 08:45:49 np0005592157 podman[149792]: 2026-01-22 13:45:49.312101483 +0000 UTC m=+0.990247142 container died 4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:45:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4f85d7e0483b5c660d28a572665f84e39e3151c1896e5c3caceb8e3de86bad52-merged.mount: Deactivated successfully.
Jan 22 08:45:49 np0005592157 python3.9[149964]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:49 np0005592157 podman[149792]: 2026-01-22 13:45:49.377857796 +0000 UTC m=+1.056003435 container remove 4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:45:49 np0005592157 systemd[1]: libpod-conmon-4b646acf340bd3af09f07d209d32e31a5f7592256ec66514e00592f8e13cf4f1.scope: Deactivated successfully.
Jan 22 08:45:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:49.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:50 np0005592157 podman[150242]: 2026-01-22 13:45:50.005289229 +0000 UTC m=+0.044115784 container create 53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:45:50 np0005592157 systemd[1]: Started libpod-conmon-53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1.scope.
Jan 22 08:45:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:45:50 np0005592157 podman[150242]: 2026-01-22 13:45:50.078351388 +0000 UTC m=+0.117177973 container init 53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:45:50 np0005592157 podman[150242]: 2026-01-22 13:45:49.98334089 +0000 UTC m=+0.022167465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:45:50 np0005592157 podman[150242]: 2026-01-22 13:45:50.085014189 +0000 UTC m=+0.123840724 container start 53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:45:50 np0005592157 podman[150242]: 2026-01-22 13:45:50.08880409 +0000 UTC m=+0.127630635 container attach 53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:45:50 np0005592157 dazzling_dijkstra[150259]: 167 167
Jan 22 08:45:50 np0005592157 systemd[1]: libpod-53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1.scope: Deactivated successfully.
Jan 22 08:45:50 np0005592157 conmon[150259]: conmon 53fc204a7b4437d09b38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1.scope/container/memory.events
Jan 22 08:45:50 np0005592157 podman[150242]: 2026-01-22 13:45:50.090949812 +0000 UTC m=+0.129776357 container died 53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 08:45:50 np0005592157 python3.9[150238]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089548.7316487-219-137128749375459/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5924758e0860d954555469ddc2c0eb92345751dcaf82a0cd5c781eb72d4ae3d1-merged.mount: Deactivated successfully.
Jan 22 08:45:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:50.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:50 np0005592157 podman[150242]: 2026-01-22 13:45:50.132559974 +0000 UTC m=+0.171386519 container remove 53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:45:50 np0005592157 systemd[1]: libpod-conmon-53fc204a7b4437d09b387c93b40412b9ade3676074db16dc20b9cf525293bea1.scope: Deactivated successfully.
Jan 22 08:45:50 np0005592157 podman[150307]: 2026-01-22 13:45:50.303076155 +0000 UTC m=+0.042306644 container create ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:45:50 np0005592157 systemd[1]: Started libpod-conmon-ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e.scope.
Jan 22 08:45:50 np0005592157 podman[150307]: 2026-01-22 13:45:50.28549867 +0000 UTC m=+0.024729159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:45:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:45:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ff2256ec08a69459f4ac09414a1c89221367e5a81f36085a91da0de0083ed0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ff2256ec08a69459f4ac09414a1c89221367e5a81f36085a91da0de0083ed0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ff2256ec08a69459f4ac09414a1c89221367e5a81f36085a91da0de0083ed0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ff2256ec08a69459f4ac09414a1c89221367e5a81f36085a91da0de0083ed0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:50 np0005592157 podman[150307]: 2026-01-22 13:45:50.406852596 +0000 UTC m=+0.146083095 container init ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhaskara, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:45:50 np0005592157 podman[150307]: 2026-01-22 13:45:50.413886276 +0000 UTC m=+0.153116755 container start ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:45:50 np0005592157 podman[150307]: 2026-01-22 13:45:50.418348014 +0000 UTC m=+0.157578523 container attach ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:45:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:51 np0005592157 python3.9[150504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]: {
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]:        "osd_id": 0,
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]:        "type": "bluestore"
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]:    }
Jan 22 08:45:51 np0005592157 jolly_bhaskara[150324]: }
Jan 22 08:45:51 np0005592157 systemd[1]: libpod-ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e.scope: Deactivated successfully.
Jan 22 08:45:51 np0005592157 podman[150307]: 2026-01-22 13:45:51.323807701 +0000 UTC m=+1.063038240 container died ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhaskara, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 08:45:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-27ff2256ec08a69459f4ac09414a1c89221367e5a81f36085a91da0de0083ed0-merged.mount: Deactivated successfully.
Jan 22 08:45:51 np0005592157 podman[150307]: 2026-01-22 13:45:51.391305464 +0000 UTC m=+1.130535943 container remove ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhaskara, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:45:51 np0005592157 systemd[1]: libpod-conmon-ac53245d0351d35103bbd808151f5d569fb7e51464e64289462e0cbb8d6c364e.scope: Deactivated successfully.
Jan 22 08:45:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:45:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 08:45:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:51.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 08:45:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:45:51 np0005592157 python3.9[150656]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089550.555241-264-117098489951583/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:52.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 101c6536-fb85-4141-a7db-b520990dc2ab does not exist
Jan 22 08:45:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8dc3c158-d79f-4595-a2af-01b01cb59519 does not exist
Jan 22 08:45:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cf4a8ec3-e036-4c3e-ab2d-d84e5410a976 does not exist
Jan 22 08:45:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:52 np0005592157 python3.9[150808]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:45:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:53.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:53 np0005592157 python3.9[150943]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:45:53 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:54.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:55.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:45:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:45:56 np0005592157 python3.9[151097]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:45:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:57 np0005592157 python3.9[151250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:57.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:57 np0005592157 python3.9[151372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089556.62247-375-243891734877182/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:58.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:58 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:58Z|00025|memory|INFO|16256 kB peak resident set size after 29.9 seconds
Jan 22 08:45:58 np0005592157 ovn_controller[146940]: 2026-01-22T13:45:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 22 08:46:58 np0005592157 podman[151449]: 2026-01-22 13:45:58.439901574 +0000 UTC m=+0.173891859 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:45:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:59 np0005592157 python3.9[151549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:45:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:59.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:45:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:59 np0005592157 python3.9[151671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089557.9684386-375-3299334958275/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:46:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:00.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:46:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:01 np0005592157 python3.9[151821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:01.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:02 np0005592157 python3.9[151943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089560.8914106-507-13077694387902/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:02.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:02 np0005592157 python3.9[152093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:03 np0005592157 python3.9[152214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089562.2726872-507-216769933344560/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:03.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:46:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:04.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:04 np0005592157 python3.9[152365]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:05 np0005592157 python3.9[152519]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:05.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:06.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:06 np0005592157 python3.9[152672]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:06 np0005592157 python3.9[152750]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:07 np0005592157 python3.9[152902]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:07.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:07 np0005592157 python3.9[152981]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:07 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:08.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:08 np0005592157 python3.9[153133]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:09.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:09 np0005592157 python3.9[153286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:10.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:10 np0005592157 python3.9[153364]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:11 np0005592157 python3.9[153566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:46:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:11.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:46:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:11 np0005592157 python3.9[153645]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:12.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:13 np0005592157 python3.9[153797]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:13 np0005592157 systemd[1]: Reloading.
Jan 22 08:46:13 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:13 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:13.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:13 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:14.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:14 np0005592157 python3.9[153987]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:14 np0005592157 python3.9[154065]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:15.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:15 np0005592157 python3.9[154218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:16.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:16 np0005592157 python3.9[154296]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:46:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:17 np0005592157 python3.9[154448]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:17 np0005592157 systemd[1]: Reloading.
Jan 22 08:46:17 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:17 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:17.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:17 np0005592157 systemd[1]: Starting Create netns directory...
Jan 22 08:46:17 np0005592157 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:46:17 np0005592157 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:46:17 np0005592157 systemd[1]: Finished Create netns directory.
Jan 22 08:46:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:18.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:18 np0005592157 python3.9[154642]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:19 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:19 np0005592157 python3.9[154794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:19.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:19 np0005592157 python3.9[154918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089578.82165-960-133353195367503/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:21 np0005592157 python3.9[155070]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:21.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:21 np0005592157 python3.9[155223]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:22.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:22 np0005592157 python3.9[155375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:23 np0005592157 python3.9[155498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089582.172813-1059-72506709838474/.source.json _original_basename=.90m3rmj6 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:23.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:24 np0005592157 python3.9[155649]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:25.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:46:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:46:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:26 np0005592157 python3.9[156073]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 22 08:46:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:27.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:27 np0005592157 python3.9[156226]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:46:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:28 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:28 np0005592157 podman[156350]: 2026-01-22 13:46:28.915781329 +0000 UTC m=+0.125289842 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 08:46:29 np0005592157 python3[156401]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:46:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:29.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:30.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:31.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 22 08:46:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:32.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 22 08:46:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:33.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:33 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:46:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:46:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:36.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:37.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:38.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:38 np0005592157 podman[156420]: 2026-01-22 13:46:38.294446621 +0000 UTC m=+8.930356106 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:38 np0005592157 podman[156604]: 2026-01-22 13:46:38.487358458 +0000 UTC m=+0.068701703 container create 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 08:46:38 np0005592157 podman[156604]: 2026-01-22 13:46:38.455353624 +0000 UTC m=+0.036696879 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:38 np0005592157 python3[156401]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:39.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:39 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:39 np0005592157 python3.9[156794]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:40.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:41 np0005592157 python3.9[156948]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:41.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:41 np0005592157 python3.9[157025]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:42.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:42 np0005592157 python3.9[157176]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089601.7069693-1293-157279624657459/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:42 np0005592157 python3.9[157252]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:46:42 np0005592157 systemd[1]: Reloading.
Jan 22 08:46:43 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:43 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:43.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:43 np0005592157 python3.9[157363]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:44.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592157 systemd[1]: Reloading.
Jan 22 08:46:45 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:45 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:45 np0005592157 systemd[1]: Starting ovn_metadata_agent container...
Jan 22 08:46:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:46:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f69ca9ebb07a015743a02e5a35eb3c719a22e1b441ddbeb261de81195b10687/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f69ca9ebb07a015743a02e5a35eb3c719a22e1b441ddbeb261de81195b10687/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:45 np0005592157 systemd[1]: Started /usr/bin/podman healthcheck run 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a.
Jan 22 08:46:45 np0005592157 podman[157404]: 2026-01-22 13:46:45.467674536 +0000 UTC m=+0.144235170 container init 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + sudo -E kolla_set_configs
Jan 22 08:46:45 np0005592157 podman[157404]: 2026-01-22 13:46:45.507410567 +0000 UTC m=+0.183971201 container start 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:46:45 np0005592157 edpm-start-podman-container[157404]: ovn_metadata_agent
Jan 22 08:46:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:46:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:45.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:46:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592157 edpm-start-podman-container[157403]: Creating additional drop-in dependency for "ovn_metadata_agent" (48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a)
Jan 22 08:46:45 np0005592157 podman[157428]: 2026-01-22 13:46:45.577055062 +0000 UTC m=+0.056401875 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 08:46:45 np0005592157 systemd[1]: Reloading.
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Validating config file
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Copying service configuration files
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Writing out command to execute
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: ++ cat /run_command
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + CMD=neutron-ovn-metadata-agent
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + ARGS=
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + sudo kolla_copy_cacerts
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + [[ ! -n '' ]]
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + . kolla_extend_start
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: Running command: 'neutron-ovn-metadata-agent'
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + umask 0022
Jan 22 08:46:45 np0005592157 ovn_metadata_agent[157421]: + exec neutron-ovn-metadata-agent
Jan 22 08:46:45 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:45 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:45 np0005592157 systemd[1]: Started ovn_metadata_agent container.
Jan 22 08:46:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:46.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:46:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:46 np0005592157 python3.9[157656]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 08:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:46:47
Jan 22 08:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'backups', 'default.rgw.meta', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 22 08:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.515 157426 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.515 157426 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.515 157426 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.516 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.516 157426 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.516 157426 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.517 157426 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.517 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.517 157426 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.517 157426 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.517 157426 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.517 157426 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.517 157426 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.518 157426 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.519 157426 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.520 157426 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.521 157426 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.522 157426 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.523 157426 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.524 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.525 157426 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.526 157426 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.527 157426 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.528 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.529 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.530 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.531 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.532 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.533 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.534 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.535 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.536 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.537 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.538 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.538 157426 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.538 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.538 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.538 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.538 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.539 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.540 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.541 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.542 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.543 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:47.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.544 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.545 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.546 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.547 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.548 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.549 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.550 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.550 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.550 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.550 157426 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.550 157426 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.559 157426 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.559 157426 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.560 157426 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.560 157426 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.560 157426 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.572 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 7335e41f-b1b8-4c04-9c19-8788162d5bb4 (UUID: 7335e41f-b1b8-4c04-9c19-8788162d5bb4) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.613 157426 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.613 157426 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.614 157426 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.614 157426 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.619 157426 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.629 157426 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.638 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '7335e41f-b1b8-4c04-9c19-8788162d5bb4'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], external_ids={}, name=7335e41f-b1b8-4c04-9c19-8788162d5bb4, nb_cfg_timestamp=1769089536532, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.639 157426 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f4af1895f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.641 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.641 157426 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.641 157426 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.642 157426 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.649 157426 DEBUG oslo_service.service [-] Started child 157734 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.656 157734 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-233692'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.656 157426 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpy9wrd1ci/privsep.sock']#033[00m
Jan 22 08:46:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.695 157734 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.696 157734 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.697 157734 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.702 157734 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.709 157734 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 08:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:47.716 157734 INFO eventlet.wsgi.server [-] (157734) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 22 08:46:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:48 np0005592157 python3.9[157814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:48.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:48 np0005592157 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.360 157426 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.360 157426 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy9wrd1ci/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.221 157842 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.231 157842 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.233 157842 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.234 157842 INFO oslo.privsep.daemon [-] privsep daemon running as pid 157842#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.363 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[089e950e-79d6-4a81-aa2b-b1d62aee2baa]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 08:46:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:48 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:48 np0005592157 python3.9[157943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089607.5484655-1428-229418445636319/.source.yaml _original_basename=.ntd1ymup follow=False checksum=a7c93daf1344287e5303b3d1648c714a9349cb4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.930 157842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.930 157842 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:46:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:48.930 157842 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:46:49 np0005592157 systemd[1]: session-48.scope: Deactivated successfully.
Jan 22 08:46:49 np0005592157 systemd[1]: session-48.scope: Consumed 1min 1.193s CPU time.
Jan 22 08:46:49 np0005592157 systemd-logind[785]: Session 48 logged out. Waiting for processes to exit.
Jan 22 08:46:49 np0005592157 systemd-logind[785]: Removed session 48.
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.457 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[f27093b3-82db-4e08-83f1-876efa264bea]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.459 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, column=external_ids, values=({'neutron:ovn-metadata-id': 'a036d6c7-c598-5cf6-8fd0-aed9d51beebc'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.468 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.475 157426 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.475 157426 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.475 157426 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.475 157426 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.475 157426 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.475 157426 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.475 157426 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.476 157426 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.476 157426 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.476 157426 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.476 157426 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.476 157426 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.476 157426 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.477 157426 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.477 157426 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.477 157426 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.477 157426 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.477 157426 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.477 157426 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.478 157426 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.478 157426 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.478 157426 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.478 157426 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.478 157426 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.478 157426 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.479 157426 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.479 157426 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.479 157426 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.479 157426 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.479 157426 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.479 157426 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.479 157426 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.480 157426 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.480 157426 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.480 157426 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.480 157426 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.480 157426 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.481 157426 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.481 157426 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.481 157426 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.481 157426 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.481 157426 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.481 157426 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.481 157426 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.482 157426 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.482 157426 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.482 157426 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.482 157426 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.482 157426 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.482 157426 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.482 157426 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.483 157426 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.484 157426 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.484 157426 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.484 157426 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.484 157426 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.484 157426 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.484 157426 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.485 157426 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.485 157426 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.485 157426 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.485 157426 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.485 157426 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.485 157426 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.485 157426 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.486 157426 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.486 157426 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.486 157426 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.486 157426 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.486 157426 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.486 157426 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.486 157426 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.487 157426 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.487 157426 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.487 157426 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.487 157426 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.487 157426 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.487 157426 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.488 157426 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.489 157426 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.489 157426 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.489 157426 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.489 157426 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.489 157426 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.489 157426 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.489 157426 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.490 157426 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.490 157426 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.490 157426 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.490 157426 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.490 157426 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.490 157426 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.490 157426 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.491 157426 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.491 157426 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.491 157426 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.491 157426 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.491 157426 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.491 157426 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.492 157426 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.492 157426 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.492 157426 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.492 157426 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.492 157426 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.492 157426 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.493 157426 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.493 157426 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.493 157426 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.493 157426 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.493 157426 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.493 157426 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.494 157426 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.494 157426 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.494 157426 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.494 157426 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.494 157426 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.494 157426 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.495 157426 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.495 157426 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.495 157426 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.495 157426 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.495 157426 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.495 157426 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.496 157426 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.497 157426 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.497 157426 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.497 157426 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.497 157426 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.497 157426 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.497 157426 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.497 157426 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.498 157426 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.499 157426 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.499 157426 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.499 157426 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.499 157426 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.499 157426 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.499 157426 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.499 157426 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.500 157426 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.500 157426 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.500 157426 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.500 157426 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.500 157426 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.500 157426 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.500 157426 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.501 157426 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.501 157426 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.501 157426 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.501 157426 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.501 157426 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.501 157426 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.501 157426 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.502 157426 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.502 157426 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.502 157426 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.502 157426 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.502 157426 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.502 157426 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.502 157426 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.503 157426 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.503 157426 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.503 157426 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.503 157426 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.503 157426 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.503 157426 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.504 157426 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.505 157426 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.505 157426 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.505 157426 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.505 157426 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.505 157426 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.505 157426 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.505 157426 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.506 157426 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.506 157426 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.506 157426 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.506 157426 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.506 157426 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.506 157426 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.506 157426 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.507 157426 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.508 157426 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.508 157426 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.508 157426 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.508 157426 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.508 157426 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.508 157426 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.508 157426 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.509 157426 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.509 157426 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.509 157426 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.509 157426 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.509 157426 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.509 157426 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.509 157426 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.510 157426 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.510 157426 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.510 157426 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.510 157426 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.510 157426 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.510 157426 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.510 157426 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.511 157426 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.511 157426 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.511 157426 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.511 157426 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.511 157426 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.511 157426 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.511 157426 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.512 157426 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.512 157426 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.512 157426 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.512 157426 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.512 157426 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.512 157426 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.512 157426 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.513 157426 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.513 157426 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.513 157426 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.513 157426 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.513 157426 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.513 157426 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.514 157426 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.514 157426 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.514 157426 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.514 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.514 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.514 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.514 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.515 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.515 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.515 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.515 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.515 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.516 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.516 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.516 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.516 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.516 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.516 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.516 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.517 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.517 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.517 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.517 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.517 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.517 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.517 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.518 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.518 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.518 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.518 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.518 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.518 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.519 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.519 157426 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.519 157426 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.519 157426 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.519 157426 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.519 157426 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:46:49.520 157426 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:46:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:49.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:51.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:52.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:52 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:53.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:46:53 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bd8d3d32-90bd-4f4e-8ff1-e968b698271a does not exist
Jan 22 08:46:53 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8044017b-f900-439b-bbea-52e271a57576 does not exist
Jan 22 08:46:53 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0c522fd3-fcc9-4cac-99ee-793c2d272636 does not exist
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:46:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:46:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:46:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:46:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:46:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:54.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:54 np0005592157 systemd-logind[785]: New session 49 of user zuul.
Jan 22 08:46:54 np0005592157 systemd[1]: Started Session 49 of User zuul.
Jan 22 08:46:54 np0005592157 podman[158293]: 2026-01-22 13:46:54.660270506 +0000 UTC m=+0.061499159 container create 1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:46:54 np0005592157 systemd[1]: Started libpod-conmon-1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d.scope.
Jan 22 08:46:54 np0005592157 podman[158293]: 2026-01-22 13:46:54.627558355 +0000 UTC m=+0.028787098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:46:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:46:54 np0005592157 podman[158293]: 2026-01-22 13:46:54.764078338 +0000 UTC m=+0.165307011 container init 1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 08:46:54 np0005592157 podman[158293]: 2026-01-22 13:46:54.772792769 +0000 UTC m=+0.174021422 container start 1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:46:54 np0005592157 podman[158293]: 2026-01-22 13:46:54.775993656 +0000 UTC m=+0.177222309 container attach 1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:46:54 np0005592157 reverent_nash[158321]: 167 167
Jan 22 08:46:54 np0005592157 systemd[1]: libpod-1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d.scope: Deactivated successfully.
Jan 22 08:46:54 np0005592157 conmon[158321]: conmon 1ad28f93c2b632247ce4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d.scope/container/memory.events
Jan 22 08:46:54 np0005592157 podman[158293]: 2026-01-22 13:46:54.78276631 +0000 UTC m=+0.183994983 container died 1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:46:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d11d565b295f9ee2d79c3101fe5eff948db0e918770bdaa910100e4722603ab8-merged.mount: Deactivated successfully.
Jan 22 08:46:54 np0005592157 podman[158293]: 2026-01-22 13:46:54.825394061 +0000 UTC m=+0.226622714 container remove 1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:46:54 np0005592157 systemd[1]: libpod-conmon-1ad28f93c2b632247ce4f38c671c626bfab244d91420b0307ddac51b9eeda77d.scope: Deactivated successfully.
Jan 22 08:46:55 np0005592157 podman[158386]: 2026-01-22 13:46:55.010606062 +0000 UTC m=+0.058644129 container create d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 08:46:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:55 np0005592157 systemd[1]: Started libpod-conmon-d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d.scope.
Jan 22 08:46:55 np0005592157 podman[158386]: 2026-01-22 13:46:54.98860838 +0000 UTC m=+0.036646437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:46:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:46:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2bf9612eafc5648cd50afef318819444ee919fe4b17b0b5f65020b17d856b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2bf9612eafc5648cd50afef318819444ee919fe4b17b0b5f65020b17d856b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2bf9612eafc5648cd50afef318819444ee919fe4b17b0b5f65020b17d856b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2bf9612eafc5648cd50afef318819444ee919fe4b17b0b5f65020b17d856b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2bf9612eafc5648cd50afef318819444ee919fe4b17b0b5f65020b17d856b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:55 np0005592157 podman[158386]: 2026-01-22 13:46:55.12085427 +0000 UTC m=+0.168892387 container init d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mccarthy, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:46:55 np0005592157 podman[158386]: 2026-01-22 13:46:55.127166132 +0000 UTC m=+0.175204209 container start d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:46:55 np0005592157 podman[158386]: 2026-01-22 13:46:55.130524494 +0000 UTC m=+0.178562561 container attach d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:46:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:46:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:46:55 np0005592157 python3.9[158505]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:46:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:55 np0005592157 inspiring_mccarthy[158402]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:46:55 np0005592157 inspiring_mccarthy[158402]: --> relative data size: 1.0
Jan 22 08:46:55 np0005592157 inspiring_mccarthy[158402]: --> All data devices are unavailable
Jan 22 08:46:56 np0005592157 systemd[1]: libpod-d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d.scope: Deactivated successfully.
Jan 22 08:46:56 np0005592157 podman[158386]: 2026-01-22 13:46:56.013118294 +0000 UTC m=+1.061156351 container died d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:46:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4c2bf9612eafc5648cd50afef318819444ee919fe4b17b0b5f65020b17d856b2-merged.mount: Deactivated successfully.
Jan 22 08:46:56 np0005592157 podman[158386]: 2026-01-22 13:46:56.068721401 +0000 UTC m=+1.116759428 container remove d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mccarthy, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 08:46:56 np0005592157 systemd[1]: libpod-conmon-d2d31fbea38caac3be9dda839286f640eebd85d131660c4d165e29921724276d.scope: Deactivated successfully.
Jan 22 08:46:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:46:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:46:56 np0005592157 podman[158797]: 2026-01-22 13:46:56.695335265 +0000 UTC m=+0.042549213 container create afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:46:56 np0005592157 systemd[1]: Started libpod-conmon-afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96.scope.
Jan 22 08:46:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:46:56 np0005592157 podman[158797]: 2026-01-22 13:46:56.672501955 +0000 UTC m=+0.019715913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:46:56 np0005592157 podman[158797]: 2026-01-22 13:46:56.786073069 +0000 UTC m=+0.133287087 container init afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:46:56 np0005592157 podman[158797]: 2026-01-22 13:46:56.797015252 +0000 UTC m=+0.144229200 container start afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:46:56 np0005592157 loving_cray[158841]: 167 167
Jan 22 08:46:56 np0005592157 systemd[1]: libpod-afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96.scope: Deactivated successfully.
Jan 22 08:46:56 np0005592157 conmon[158841]: conmon afcec541f3a5ce3c8157 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96.scope/container/memory.events
Jan 22 08:46:56 np0005592157 podman[158797]: 2026-01-22 13:46:56.800774026 +0000 UTC m=+0.147987994 container attach afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:46:56 np0005592157 podman[158797]: 2026-01-22 13:46:56.805145355 +0000 UTC m=+0.152359323 container died afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:46:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 08:46:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6fb47c06342b7bd14eed1855523dab8174d360b696990e0b21e207a5fa0eea80-merged.mount: Deactivated successfully.
Jan 22 08:46:56 np0005592157 podman[158797]: 2026-01-22 13:46:56.846461836 +0000 UTC m=+0.193675764 container remove afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 08:46:56 np0005592157 systemd[1]: libpod-conmon-afcec541f3a5ce3c8157a5a834ebd9eebb8ad69476f4bc68e9e80dce7316bd96.scope: Deactivated successfully.
Jan 22 08:46:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 22 08:46:56 np0005592157 python3.9[158840]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:46:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 22 08:46:56 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 08:46:57 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 22 08:46:57 np0005592157 podman[158880]: 2026-01-22 13:46:57.036074047 +0000 UTC m=+0.044629985 container create 329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lichterman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:46:57 np0005592157 systemd[1]: Started libpod-conmon-329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345.scope.
Jan 22 08:46:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:46:57 np0005592157 podman[158880]: 2026-01-22 13:46:57.018269162 +0000 UTC m=+0.026825110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:46:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c962a4b167fb3229864df94cd736c0e013e2243ea3855756280814315cae39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c962a4b167fb3229864df94cd736c0e013e2243ea3855756280814315cae39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c962a4b167fb3229864df94cd736c0e013e2243ea3855756280814315cae39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c962a4b167fb3229864df94cd736c0e013e2243ea3855756280814315cae39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:57 np0005592157 podman[158880]: 2026-01-22 13:46:57.131912358 +0000 UTC m=+0.140468406 container init 329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lichterman, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:46:57 np0005592157 podman[158880]: 2026-01-22 13:46:57.143645791 +0000 UTC m=+0.152201759 container start 329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lichterman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:46:57 np0005592157 podman[158880]: 2026-01-22 13:46:57.148230175 +0000 UTC m=+0.156786203 container attach 329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:46:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:57 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 22 08:46:57 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 22 08:46:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:57.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:46:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]: {
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:    "0": [
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:        {
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "devices": [
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "/dev/loop3"
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            ],
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "lv_name": "ceph_lv0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "lv_size": "7511998464",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "name": "ceph_lv0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "tags": {
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.cluster_name": "ceph",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.crush_device_class": "",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.encrypted": "0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.osd_id": "0",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.type": "block",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:                "ceph.vdo": "0"
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            },
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "type": "block",
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:            "vg_name": "ceph_vg0"
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:        }
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]:    ]
Jan 22 08:46:57 np0005592157 sharp_lichterman[158920]: }
Jan 22 08:46:57 np0005592157 systemd[1]: libpod-329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345.scope: Deactivated successfully.
Jan 22 08:46:58 np0005592157 podman[158880]: 2026-01-22 13:46:57.999851423 +0000 UTC m=+1.008407371 container died 329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lichterman, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 08:46:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7c962a4b167fb3229864df94cd736c0e013e2243ea3855756280814315cae39f-merged.mount: Deactivated successfully.
Jan 22 08:46:58 np0005592157 podman[158880]: 2026-01-22 13:46:58.069227214 +0000 UTC m=+1.077783152 container remove 329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:46:58 np0005592157 systemd[1]: libpod-conmon-329814b3cc7975c43025fc9b35870fea96c586d62f02fcbaa9148240923fd345.scope: Deactivated successfully.
Jan 22 08:46:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:46:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:46:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:58 np0005592157 python3.9[159069]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:46:58 np0005592157 systemd[1]: Reloading.
Jan 22 08:46:58 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:58 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:58 np0005592157 podman[159245]: 2026-01-22 13:46:58.810230323 +0000 UTC m=+0.069739991 container create bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 08:46:58 np0005592157 podman[159245]: 2026-01-22 13:46:58.767468946 +0000 UTC m=+0.026978634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:46:58 np0005592157 systemd[1]: Started libpod-conmon-bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9.scope.
Jan 22 08:46:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:46:58 np0005592157 podman[159245]: 2026-01-22 13:46:58.946231776 +0000 UTC m=+0.205741534 container init bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:46:58 np0005592157 podman[159245]: 2026-01-22 13:46:58.95319794 +0000 UTC m=+0.212707618 container start bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:46:58 np0005592157 determined_pasteur[159285]: 167 167
Jan 22 08:46:58 np0005592157 systemd[1]: libpod-bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9.scope: Deactivated successfully.
Jan 22 08:46:58 np0005592157 podman[159245]: 2026-01-22 13:46:58.971962028 +0000 UTC m=+0.231471756 container attach bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:46:58 np0005592157 podman[159245]: 2026-01-22 13:46:58.973662681 +0000 UTC m=+0.233172349 container died bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:46:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-af639ae5d722549e3ae88f4ca329dabbab6dc95503adc6e48ce59ee4e312242a-merged.mount: Deactivated successfully.
Jan 22 08:46:59 np0005592157 podman[159245]: 2026-01-22 13:46:59.085171953 +0000 UTC m=+0.344681621 container remove bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 08:46:59 np0005592157 podman[159314]: 2026-01-22 13:46:59.088301791 +0000 UTC m=+0.091641627 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 08:46:59 np0005592157 systemd[1]: libpod-conmon-bd6568cca93739e25a6288c2ddfefe20fecd1f126b22fb6a644452fad8cfd7f9.scope: Deactivated successfully.
Jan 22 08:46:59 np0005592157 podman[159386]: 2026-01-22 13:46:59.282902936 +0000 UTC m=+0.068325125 container create ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:46:59 np0005592157 podman[159386]: 2026-01-22 13:46:59.241165285 +0000 UTC m=+0.026587494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:46:59 np0005592157 systemd[1]: Started libpod-conmon-ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9.scope.
Jan 22 08:46:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:46:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a3d4af68687fbe3bd5422aaf5e0407482dd697bf52053ebcb9828830ca1c0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a3d4af68687fbe3bd5422aaf5e0407482dd697bf52053ebcb9828830ca1c0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a3d4af68687fbe3bd5422aaf5e0407482dd697bf52053ebcb9828830ca1c0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a3d4af68687fbe3bd5422aaf5e0407482dd697bf52053ebcb9828830ca1c0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:59 np0005592157 podman[159386]: 2026-01-22 13:46:59.400001138 +0000 UTC m=+0.185423347 container init ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:46:59 np0005592157 podman[159386]: 2026-01-22 13:46:59.411082355 +0000 UTC m=+0.196504554 container start ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:46:59 np0005592157 podman[159386]: 2026-01-22 13:46:59.420384647 +0000 UTC m=+0.205806866 container attach ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:46:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:59 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:46:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:59.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:59 np0005592157 python3.9[159482]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:46:59 np0005592157 network[159499]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:46:59 np0005592157 network[159500]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:46:59 np0005592157 network[159501]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:46:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 08:47:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:00 np0005592157 clever_bell[159450]: {
Jan 22 08:47:00 np0005592157 clever_bell[159450]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:47:00 np0005592157 clever_bell[159450]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:47:00 np0005592157 clever_bell[159450]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:47:00 np0005592157 clever_bell[159450]:        "osd_id": 0,
Jan 22 08:47:00 np0005592157 clever_bell[159450]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:47:00 np0005592157 clever_bell[159450]:        "type": "bluestore"
Jan 22 08:47:00 np0005592157 clever_bell[159450]:    }
Jan 22 08:47:00 np0005592157 clever_bell[159450]: }
Jan 22 08:47:00 np0005592157 podman[159386]: 2026-01-22 13:47:00.285104201 +0000 UTC m=+1.070526450 container died ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:47:00 np0005592157 systemd[1]: libpod-ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9.scope: Deactivated successfully.
Jan 22 08:47:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e2a3d4af68687fbe3bd5422aaf5e0407482dd697bf52053ebcb9828830ca1c0b-merged.mount: Deactivated successfully.
Jan 22 08:47:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:00 np0005592157 podman[159386]: 2026-01-22 13:47:00.65617806 +0000 UTC m=+1.441600279 container remove ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:47:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:47:00 np0005592157 systemd[1]: libpod-conmon-ff77e065481b4a917218f34dcb097eeb1fcf3aa7b791435eaea0eecdf1231dd9.scope: Deactivated successfully.
Jan 22 08:47:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:47:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 39e5b911-e8e3-45d4-bdb3-710e7fb93408 does not exist
Jan 22 08:47:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a1842fa9-eba5-4a80-ad9d-ba73dcf15a77 does not exist
Jan 22 08:47:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1932fd85-6db1-4969-afb4-e3b67d8a1065 does not exist
Jan 22 08:47:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:01.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 22 08:47:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:02.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:03.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:47:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 08:47:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:04.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:05.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 08:47:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:06.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:06 np0005592157 python3.9[159844]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:07.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 08:47:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:08 np0005592157 python3.9[159998]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:08.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:08 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:08 np0005592157 python3.9[160151]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:09.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 08:47:09 np0005592157 python3.9[160305]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:47:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:47:11 np0005592157 python3.9[160459]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 22 08:47:11 np0005592157 python3.9[160663]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:12 np0005592157 python3.9[160816]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 619 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Jan 22 08:47:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:13 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 619 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:15 np0005592157 python3.9[160970]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:15.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:15 np0005592157 podman[161123]: 2026-01-22 13:47:15.742773686 +0000 UTC m=+0.071722421 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 08:47:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 22 08:47:15 np0005592157 python3.9[161124]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:47:16 np0005592157 python3.9[161294]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:17 np0005592157 python3.9[161446]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:17.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 22 08:47:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:17 np0005592157 python3.9[161599]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:18.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:18 np0005592157 python3.9[161751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:19 np0005592157 python3.9[161903]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:19.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:19 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:21 np0005592157 python3.9[162056]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:21 np0005592157 python3.9[162209]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:22.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:22 np0005592157 python3.9[162361]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 634 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592157 python3.9[162513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:23.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:24 np0005592157 python3.9[162666]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:24.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:24 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 634 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:25 np0005592157 python3.9[162818]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:25.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:25 np0005592157 python3.9[162971]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:26.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:26 np0005592157 python3.9[163123]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:27.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:27 np0005592157 python3.9[163276]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:47:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 639 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:28.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592157 podman[163429]: 2026-01-22 13:47:29.380251906 +0000 UTC m=+0.106038006 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 08:47:29 np0005592157 python3.9[163428]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:47:29 np0005592157 systemd[1]: Reloading.
Jan 22 08:47:29 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:47:29 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:47:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:29.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 639 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:30.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:30 np0005592157 python3.9[163641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:31 np0005592157 python3.9[163794]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:32.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:32 np0005592157 python3.9[163998]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:32 np0005592157 python3.9[164151]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:33 np0005592157 python3.9[164305]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:33.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:34.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:34 np0005592157 python3.9[164458]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:35 np0005592157 python3.9[164611]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:35.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:36.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:37.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 644 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 644 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:38.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:38 np0005592157 python3.9[164766]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 22 08:47:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:39 np0005592157 python3.9[164919]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:47:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:39.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:40.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:40 np0005592157 python3.9[165078]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:47:40 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:47:40 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:47:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:41.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:42.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:42 np0005592157 python3.9[165240]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:47:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:42 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:43.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:43 np0005592157 python3.9[165325]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:47:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:45.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:46 np0005592157 podman[165331]: 2026-01-22 13:47:46.37043333 +0000 UTC m=+0.088919860 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Jan 22 08:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:47:47
Jan 22 08:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'vms', 'volumes', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Jan 22 08:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:47:47 np0005592157 ceph-mgr[74655]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:47:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:47:47.553 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:47:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:47:47.553 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:47:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:47:47.554 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:47:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:47.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:47:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:47:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:49 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:47:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:49.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:47:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:50.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:51.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:47:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:52.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:47:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:54.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:58.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:47:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:47:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:59.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:47:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:47:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:00.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:00 np0005592157 podman[165588]: 2026-01-22 13:48:00.485189352 +0000 UTC m=+0.180783114 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:48:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:01.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:02.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ec8d80e9-a021-4336-8fc9-3ba1fb53e24c does not exist
Jan 22 08:48:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3af1b0a1-aef1-488a-b3db-afee64cd4416 does not exist
Jan 22 08:48:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1b9aba39-9173-4f4a-9957-ac9497329346 does not exist
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:03 np0005592157 podman[165889]: 2026-01-22 13:48:03.033340326 +0000 UTC m=+0.050111795 container create 9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:48:03 np0005592157 systemd[1]: Started libpod-conmon-9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9.scope.
Jan 22 08:48:03 np0005592157 podman[165889]: 2026-01-22 13:48:03.015001519 +0000 UTC m=+0.031773018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:48:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:48:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:48:03 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:48:03 np0005592157 podman[165889]: 2026-01-22 13:48:03.147244071 +0000 UTC m=+0.164015550 container init 9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:48:03 np0005592157 podman[165889]: 2026-01-22 13:48:03.157040836 +0000 UTC m=+0.173812315 container start 9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:48:03 np0005592157 podman[165889]: 2026-01-22 13:48:03.160396523 +0000 UTC m=+0.177167992 container attach 9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:48:03 np0005592157 thirsty_mcclintock[165905]: 167 167
Jan 22 08:48:03 np0005592157 podman[165889]: 2026-01-22 13:48:03.166103722 +0000 UTC m=+0.182875191 container died 9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:48:03 np0005592157 systemd[1]: libpod-9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9.scope: Deactivated successfully.
Jan 22 08:48:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-922b4b0aa6d7313a20b7b1b9f0b47c33db0f5e229f70ce4d8450181cddfce68a-merged.mount: Deactivated successfully.
Jan 22 08:48:03 np0005592157 podman[165889]: 2026-01-22 13:48:03.213632379 +0000 UTC m=+0.230403848 container remove 9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:48:03 np0005592157 systemd[1]: libpod-conmon-9c723ffec61140951bc00850a28f0a0019b50938b4246a167e19c9077696bbd9.scope: Deactivated successfully.
Jan 22 08:48:03 np0005592157 podman[165931]: 2026-01-22 13:48:03.416254543 +0000 UTC m=+0.079949282 container create edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:48:03 np0005592157 systemd[1]: Started libpod-conmon-edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7.scope.
Jan 22 08:48:03 np0005592157 podman[165931]: 2026-01-22 13:48:03.393234274 +0000 UTC m=+0.056929033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:48:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:48:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7a2e64228f5978750cbe42a73ab3ac6997071429c1a95ec9aef1acf1626fcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7a2e64228f5978750cbe42a73ab3ac6997071429c1a95ec9aef1acf1626fcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7a2e64228f5978750cbe42a73ab3ac6997071429c1a95ec9aef1acf1626fcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7a2e64228f5978750cbe42a73ab3ac6997071429c1a95ec9aef1acf1626fcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7a2e64228f5978750cbe42a73ab3ac6997071429c1a95ec9aef1acf1626fcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:03 np0005592157 podman[165931]: 2026-01-22 13:48:03.512850907 +0000 UTC m=+0.176545666 container init edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:48:03 np0005592157 podman[165931]: 2026-01-22 13:48:03.525430515 +0000 UTC m=+0.189125254 container start edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:48:03 np0005592157 podman[165931]: 2026-01-22 13:48:03.530412614 +0000 UTC m=+0.194107353 container attach edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:48:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:03.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:04.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:04 np0005592157 relaxed_jang[165948]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:48:04 np0005592157 relaxed_jang[165948]: --> relative data size: 1.0
Jan 22 08:48:04 np0005592157 relaxed_jang[165948]: --> All data devices are unavailable
Jan 22 08:48:04 np0005592157 systemd[1]: libpod-edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7.scope: Deactivated successfully.
Jan 22 08:48:04 np0005592157 podman[165931]: 2026-01-22 13:48:04.454823484 +0000 UTC m=+1.118518263 container died edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:48:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9b7a2e64228f5978750cbe42a73ab3ac6997071429c1a95ec9aef1acf1626fcc-merged.mount: Deactivated successfully.
Jan 22 08:48:04 np0005592157 podman[165931]: 2026-01-22 13:48:04.548905593 +0000 UTC m=+1.212600362 container remove edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:48:04 np0005592157 systemd[1]: libpod-conmon-edee94aee0557ab66b0b82fd3bd9263036b4300bc20afdf79387a6e4303ca2e7.scope: Deactivated successfully.
Jan 22 08:48:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:05 np0005592157 podman[166114]: 2026-01-22 13:48:05.313788762 +0000 UTC m=+0.065541377 container create b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lehmann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:48:05 np0005592157 systemd[1]: Started libpod-conmon-b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914.scope.
Jan 22 08:48:05 np0005592157 podman[166114]: 2026-01-22 13:48:05.289584852 +0000 UTC m=+0.041337537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:48:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:48:05 np0005592157 podman[166114]: 2026-01-22 13:48:05.404366149 +0000 UTC m=+0.156118844 container init b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:48:05 np0005592157 podman[166114]: 2026-01-22 13:48:05.413746504 +0000 UTC m=+0.165499109 container start b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:48:05 np0005592157 podman[166114]: 2026-01-22 13:48:05.417198343 +0000 UTC m=+0.168951048 container attach b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 22 08:48:05 np0005592157 systemd[1]: libpod-b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914.scope: Deactivated successfully.
Jan 22 08:48:05 np0005592157 elegant_lehmann[166132]: 167 167
Jan 22 08:48:05 np0005592157 conmon[166132]: conmon b7b55ef3cc12b1f1df79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914.scope/container/memory.events
Jan 22 08:48:05 np0005592157 podman[166114]: 2026-01-22 13:48:05.421097045 +0000 UTC m=+0.172849670 container died b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:48:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b45482c75227198f4a3f6097e331b9d1955f785fb4768816a9c3d683c9b30288-merged.mount: Deactivated successfully.
Jan 22 08:48:05 np0005592157 podman[166114]: 2026-01-22 13:48:05.467119933 +0000 UTC m=+0.218872558 container remove b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lehmann, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:48:05 np0005592157 systemd[1]: libpod-conmon-b7b55ef3cc12b1f1df792225976026464d10567eb9a4c610d2a52de4c740f914.scope: Deactivated successfully.
Jan 22 08:48:05 np0005592157 podman[166158]: 2026-01-22 13:48:05.65602848 +0000 UTC m=+0.051154643 container create d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heisenberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:48:05 np0005592157 systemd[1]: Started libpod-conmon-d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504.scope.
Jan 22 08:48:05 np0005592157 podman[166158]: 2026-01-22 13:48:05.629868599 +0000 UTC m=+0.024994782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:48:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:48:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c578130d7e999119e5bcc9971bf8e2434fd5a824e4dc901222de923b3bb5b7c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c578130d7e999119e5bcc9971bf8e2434fd5a824e4dc901222de923b3bb5b7c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c578130d7e999119e5bcc9971bf8e2434fd5a824e4dc901222de923b3bb5b7c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c578130d7e999119e5bcc9971bf8e2434fd5a824e4dc901222de923b3bb5b7c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:05.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:05 np0005592157 podman[166158]: 2026-01-22 13:48:05.76436568 +0000 UTC m=+0.159491913 container init d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heisenberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:48:05 np0005592157 podman[166158]: 2026-01-22 13:48:05.775423958 +0000 UTC m=+0.170550121 container start d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heisenberg, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:48:05 np0005592157 podman[166158]: 2026-01-22 13:48:05.780030627 +0000 UTC m=+0.175156920 container attach d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:48:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:06.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]: {
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:    "0": [
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:        {
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "devices": [
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "/dev/loop3"
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            ],
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "lv_name": "ceph_lv0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "lv_size": "7511998464",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "name": "ceph_lv0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "tags": {
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.cluster_name": "ceph",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.crush_device_class": "",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.encrypted": "0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.osd_id": "0",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.type": "block",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:                "ceph.vdo": "0"
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            },
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "type": "block",
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:            "vg_name": "ceph_vg0"
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:        }
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]:    ]
Jan 22 08:48:06 np0005592157 nifty_heisenberg[166174]: }
Jan 22 08:48:06 np0005592157 systemd[1]: libpod-d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504.scope: Deactivated successfully.
Jan 22 08:48:06 np0005592157 podman[166158]: 2026-01-22 13:48:06.638726878 +0000 UTC m=+1.033853041 container died d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heisenberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:48:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c578130d7e999119e5bcc9971bf8e2434fd5a824e4dc901222de923b3bb5b7c4-merged.mount: Deactivated successfully.
Jan 22 08:48:07 np0005592157 podman[166158]: 2026-01-22 13:48:07.121754251 +0000 UTC m=+1.516880414 container remove d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 08:48:07 np0005592157 systemd[1]: libpod-conmon-d5a6defd9a2d9d76e69c551475d6407124147c32d39a8464985229189c06f504.scope: Deactivated successfully.
Jan 22 08:48:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:07.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:07 np0005592157 podman[166341]: 2026-01-22 13:48:07.799319396 +0000 UTC m=+0.056683286 container create 93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:48:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:07 np0005592157 systemd[1]: Started libpod-conmon-93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122.scope.
Jan 22 08:48:07 np0005592157 podman[166341]: 2026-01-22 13:48:07.769420658 +0000 UTC m=+0.026784578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:48:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:48:07 np0005592157 podman[166341]: 2026-01-22 13:48:07.892405359 +0000 UTC m=+0.149769279 container init 93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:48:07 np0005592157 podman[166341]: 2026-01-22 13:48:07.900092479 +0000 UTC m=+0.157456389 container start 93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 08:48:07 np0005592157 podman[166341]: 2026-01-22 13:48:07.904255387 +0000 UTC m=+0.161619277 container attach 93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:48:07 np0005592157 wonderful_allen[166356]: 167 167
Jan 22 08:48:07 np0005592157 systemd[1]: libpod-93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122.scope: Deactivated successfully.
Jan 22 08:48:07 np0005592157 podman[166341]: 2026-01-22 13:48:07.910379427 +0000 UTC m=+0.167743377 container died 93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:48:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-146c480a3c7344f9a454b882d7edb138bb2bf9401ff32f518d98e221dd00c47f-merged.mount: Deactivated successfully.
Jan 22 08:48:07 np0005592157 podman[166341]: 2026-01-22 13:48:07.960846701 +0000 UTC m=+0.218210601 container remove 93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:48:07 np0005592157 systemd[1]: libpod-conmon-93e784b06689592ab7747230bd52bf24658581eb05fd61e7eca5b8aabbafb122.scope: Deactivated successfully.
Jan 22 08:48:08 np0005592157 podman[166382]: 2026-01-22 13:48:08.178023794 +0000 UTC m=+0.042937569 container create 1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rubin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:48:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:08 np0005592157 systemd[1]: Started libpod-conmon-1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e.scope.
Jan 22 08:48:08 np0005592157 podman[166382]: 2026-01-22 13:48:08.158837894 +0000 UTC m=+0.023751699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:48:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:48:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18163136b5c056afa4f3d89b6c3f5de3cccf2a0243ddf2299d5387e7d2c3872d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18163136b5c056afa4f3d89b6c3f5de3cccf2a0243ddf2299d5387e7d2c3872d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18163136b5c056afa4f3d89b6c3f5de3cccf2a0243ddf2299d5387e7d2c3872d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18163136b5c056afa4f3d89b6c3f5de3cccf2a0243ddf2299d5387e7d2c3872d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:48:08 np0005592157 podman[166382]: 2026-01-22 13:48:08.281073556 +0000 UTC m=+0.145987341 container init 1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:48:08 np0005592157 podman[166382]: 2026-01-22 13:48:08.287807301 +0000 UTC m=+0.152721076 container start 1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 08:48:08 np0005592157 podman[166382]: 2026-01-22 13:48:08.301749854 +0000 UTC m=+0.166663649 container attach 1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:48:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:08.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]: {
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]:        "osd_id": 0,
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]:        "type": "bluestore"
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]:    }
Jan 22 08:48:09 np0005592157 lucid_rubin[166399]: }
Jan 22 08:48:09 np0005592157 systemd[1]: libpod-1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e.scope: Deactivated successfully.
Jan 22 08:48:09 np0005592157 podman[166382]: 2026-01-22 13:48:09.20983425 +0000 UTC m=+1.074748035 container died 1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rubin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:48:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-18163136b5c056afa4f3d89b6c3f5de3cccf2a0243ddf2299d5387e7d2c3872d-merged.mount: Deactivated successfully.
Jan 22 08:48:09 np0005592157 podman[166382]: 2026-01-22 13:48:09.39619407 +0000 UTC m=+1.261107845 container remove 1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rubin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:48:09 np0005592157 systemd[1]: libpod-conmon-1d895731e5a06ddbad9b4f6ba39855e2deb1d5ab14b55ed73069853a6745fe0e.scope: Deactivated successfully.
Jan 22 08:48:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:48:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:09 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:09.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:10.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:48:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:12 np0005592157 kernel: SELinux:  Converting 2777 SID table entries...
Jan 22 08:48:12 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:48:12 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:48:12 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:48:12 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:48:12 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:48:12 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:48:12 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:48:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:13.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:14.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ce7d56c4-35d9-46d3-b14c-c182d8a818a4 does not exist
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ad0435d7-1171-4d4c-9dd6-74fe5ddf14ce does not exist
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 427c196c-b267-4b3f-9f50-f4292aae179e does not exist
Jan 22 08:48:16 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 22 08:48:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:48:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:17 np0005592157 podman[166548]: 2026-01-22 13:48:17.38324063 +0000 UTC m=+0.107567581 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 22 08:48:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:17.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:18.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:18 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:19.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:20.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:22 np0005592157 kernel: SELinux:  Converting 2777 SID table entries...
Jan 22 08:48:22 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:48:22 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:48:22 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:48:22 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:48:22 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:48:22 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:48:22 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:48:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:27.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:28.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:29 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:29.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:30.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:31 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 22 08:48:31 np0005592157 podman[166581]: 2026-01-22 13:48:31.390667348 +0000 UTC m=+0.107489499 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:48:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:31.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:32.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:33.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:34.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:35.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:36.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:37.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:38.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:38 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:39.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:40.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:41.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:42.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.873565) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722873728, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2819, "num_deletes": 507, "total_data_size": 3976413, "memory_usage": 4053008, "flush_reason": "Manual Compaction"}
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722905017, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 3880836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12929, "largest_seqno": 15747, "table_properties": {"data_size": 3869661, "index_size": 6325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 30436, "raw_average_key_size": 20, "raw_value_size": 3843024, "raw_average_value_size": 2563, "num_data_blocks": 276, "num_entries": 1499, "num_filter_entries": 1499, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089508, "oldest_key_time": 1769089508, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 31467 microseconds, and 13013 cpu microseconds.
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.905105) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 3880836 bytes OK
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.905136) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906960) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906989) EVENT_LOG_v1 {"time_micros": 1769089722906985, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.907018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3963709, prev total WAL file size 3963709, number of live WAL files 2.
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.908549) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(3789KB)], [29(8116KB)]
Jan 22 08:48:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722908684, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12192296, "oldest_snapshot_seqno": -1}
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 5023 keys, 10032213 bytes, temperature: kUnknown
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089723012568, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 10032213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9996628, "index_size": 21907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125694, "raw_average_key_size": 25, "raw_value_size": 9903575, "raw_average_value_size": 1971, "num_data_blocks": 912, "num_entries": 5023, "num_filter_entries": 5023, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.013385) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10032213 bytes
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.067816) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.8 rd, 96.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6054, records dropped: 1031 output_compression: NoCompression
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.067880) EVENT_LOG_v1 {"time_micros": 1769089723067856, "job": 12, "event": "compaction_finished", "compaction_time_micros": 104356, "compaction_time_cpu_micros": 37502, "output_level": 6, "num_output_files": 1, "total_output_size": 10032213, "num_input_records": 6054, "num_output_records": 5023, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089723068882, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089723070611, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:42.908346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.070740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.070746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.070749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.070751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:48:43.070753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:43 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:43.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:44.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:46.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:48:46 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:48:47
Jan 22 08:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.control', 'volumes']
Jan 22 08:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:48:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:48:47.554 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 08:48:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:48:47.555 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 08:48:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:48:47.555 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 08:48:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:48 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:48 np0005592157 podman[174055]: 2026-01-22 13:48:48.325054221 +0000 UTC m=+0.059479449 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 08:48:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:48.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:49 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:49.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:50.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:52.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:48:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:53.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:48:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:54.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:56.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:58.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:58 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:48:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:59.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:48:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:49:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:00.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:49:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:49:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:01.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:49:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:49:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:49:02 np0005592157 podman[182492]: 2026-01-22 13:49:02.379803481 +0000 UTC m=+0.098526875 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 08:49:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:49:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:49:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:49:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:49:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:04.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:05.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:06.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:07.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:08.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:09.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:09 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:10.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:11.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:13.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:14.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:15.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:16.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c2effde2-a125-419a-9cf3-cb33543aa337 does not exist
Jan 22 08:49:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0cee25bc-4e1c-42dd-ab5e-7cbe8a87306f does not exist
Jan 22 08:49:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 34f7e8ff-28fd-4037-add0-f84deeb6f9f1 does not exist
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:49:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:49:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:17.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:18 np0005592157 podman[184000]: 2026-01-22 13:49:18.541958485 +0000 UTC m=+0.029198549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:49:18 np0005592157 podman[184000]: 2026-01-22 13:49:18.674552579 +0000 UTC m=+0.161792613 container create 0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:49:18 np0005592157 systemd[1]: Started libpod-conmon-0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4.scope.
Jan 22 08:49:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:49:19 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:19 np0005592157 podman[184000]: 2026-01-22 13:49:19.106186878 +0000 UTC m=+0.593426922 container init 0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:49:19 np0005592157 podman[184000]: 2026-01-22 13:49:19.12068426 +0000 UTC m=+0.607924284 container start 0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 08:49:19 np0005592157 stoic_cray[184028]: 167 167
Jan 22 08:49:19 np0005592157 systemd[1]: libpod-0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4.scope: Deactivated successfully.
Jan 22 08:49:19 np0005592157 podman[184000]: 2026-01-22 13:49:19.291184081 +0000 UTC m=+0.778424115 container attach 0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 08:49:19 np0005592157 podman[184000]: 2026-01-22 13:49:19.291790179 +0000 UTC m=+0.779030213 container died 0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:49:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-70b8d829e6985ccce9c38bb211111e8b5337a56b72bb29ef1b75ae4fd378388d-merged.mount: Deactivated successfully.
Jan 22 08:49:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:19.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:19 np0005592157 podman[184000]: 2026-01-22 13:49:19.936040762 +0000 UTC m=+1.423280786 container remove 0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 08:49:19 np0005592157 podman[184014]: 2026-01-22 13:49:19.998967294 +0000 UTC m=+1.269591545 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 08:49:20 np0005592157 systemd[1]: libpod-conmon-0005f7531e9d4d41bc7bb09c67cdc3a7098db28c99f74c5f13c53e7d045921f4.scope: Deactivated successfully.
Jan 22 08:49:20 np0005592157 podman[184067]: 2026-01-22 13:49:20.121519619 +0000 UTC m=+0.026881230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:49:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:21 np0005592157 podman[184067]: 2026-01-22 13:49:21.828452923 +0000 UTC m=+1.733814474 container create c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:49:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:21.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:22 np0005592157 systemd[1]: Started libpod-conmon-c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988.scope.
Jan 22 08:49:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:49:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bdb2e14141bc297355c5b49fbeedf21763cd055e05a9e7d66026ff5dd27730/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bdb2e14141bc297355c5b49fbeedf21763cd055e05a9e7d66026ff5dd27730/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bdb2e14141bc297355c5b49fbeedf21763cd055e05a9e7d66026ff5dd27730/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bdb2e14141bc297355c5b49fbeedf21763cd055e05a9e7d66026ff5dd27730/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bdb2e14141bc297355c5b49fbeedf21763cd055e05a9e7d66026ff5dd27730/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:22 np0005592157 podman[184067]: 2026-01-22 13:49:22.456807433 +0000 UTC m=+2.362169074 container init c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:49:22 np0005592157 podman[184067]: 2026-01-22 13:49:22.464880583 +0000 UTC m=+2.370242134 container start c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:49:22 np0005592157 podman[184067]: 2026-01-22 13:49:22.834469907 +0000 UTC m=+2.739831468 container attach c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:49:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:23 np0005592157 kernel: SELinux:  Converting 2778 SID table entries...
Jan 22 08:49:23 np0005592157 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:49:23 np0005592157 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:49:23 np0005592157 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:49:23 np0005592157 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:49:23 np0005592157 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:49:23 np0005592157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:49:23 np0005592157 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:49:23 np0005592157 boring_easley[184085]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:49:23 np0005592157 boring_easley[184085]: --> relative data size: 1.0
Jan 22 08:49:23 np0005592157 boring_easley[184085]: --> All data devices are unavailable
Jan 22 08:49:23 np0005592157 podman[184067]: 2026-01-22 13:49:23.439786421 +0000 UTC m=+3.345148002 container died c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:49:23 np0005592157 systemd[1]: libpod-c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988.scope: Deactivated successfully.
Jan 22 08:49:23 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 22 08:49:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:23.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d0bdb2e14141bc297355c5b49fbeedf21763cd055e05a9e7d66026ff5dd27730-merged.mount: Deactivated successfully.
Jan 22 08:49:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:24.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:25.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:26 np0005592157 podman[184067]: 2026-01-22 13:49:26.056209388 +0000 UTC m=+5.961570939 container remove c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:49:26 np0005592157 systemd[1]: libpod-conmon-c29e7ba369d51c8b50adaa12b7d0374e7fef0689658fea9d2be7ac4b23117988.scope: Deactivated successfully.
Jan 22 08:49:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:26.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:26 np0005592157 podman[184264]: 2026-01-22 13:49:26.844017441 +0000 UTC m=+0.026439547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:49:27 np0005592157 podman[184264]: 2026-01-22 13:49:27.093515972 +0000 UTC m=+0.275938088 container create e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:49:27 np0005592157 systemd[1]: Started libpod-conmon-e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9.scope.
Jan 22 08:49:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:49:27 np0005592157 podman[184264]: 2026-01-22 13:49:27.389792305 +0000 UTC m=+0.572214411 container init e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:49:27 np0005592157 podman[184264]: 2026-01-22 13:49:27.404530134 +0000 UTC m=+0.586952230 container start e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:49:27 np0005592157 podman[184264]: 2026-01-22 13:49:27.409911774 +0000 UTC m=+0.592333860 container attach e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:49:27 np0005592157 loving_lichterman[184280]: 167 167
Jan 22 08:49:27 np0005592157 systemd[1]: libpod-e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9.scope: Deactivated successfully.
Jan 22 08:49:27 np0005592157 podman[184264]: 2026-01-22 13:49:27.411510631 +0000 UTC m=+0.593932727 container died e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:49:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:27.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-adc817a6e1019ae64d6e5143bf153f27fa4836071356b3b229cd1aecabb1984b-merged.mount: Deactivated successfully.
Jan 22 08:49:28 np0005592157 podman[184264]: 2026-01-22 13:49:28.396899032 +0000 UTC m=+1.579321118 container remove e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:49:28 np0005592157 systemd[1]: libpod-conmon-e099f961e4773e8c80ac1fd71ccfcc57e1185837d5c333bbaa500f9b67235ed9.scope: Deactivated successfully.
Jan 22 08:49:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:28 np0005592157 podman[184307]: 2026-01-22 13:49:28.533438243 +0000 UTC m=+0.026948092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:49:28 np0005592157 podman[184307]: 2026-01-22 13:49:28.988252132 +0000 UTC m=+0.481761951 container create 708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:49:29 np0005592157 systemd[1]: Started libpod-conmon-708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91.scope.
Jan 22 08:49:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:49:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f070d1461856560533a5635ec9fd418dca382d42772c607ede00cea65c73cdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f070d1461856560533a5635ec9fd418dca382d42772c607ede00cea65c73cdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f070d1461856560533a5635ec9fd418dca382d42772c607ede00cea65c73cdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f070d1461856560533a5635ec9fd418dca382d42772c607ede00cea65c73cdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:29 np0005592157 podman[184307]: 2026-01-22 13:49:29.39530336 +0000 UTC m=+0.888813239 container init 708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:49:29 np0005592157 podman[184307]: 2026-01-22 13:49:29.409955156 +0000 UTC m=+0.903464985 container start 708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 08:49:29 np0005592157 podman[184307]: 2026-01-22 13:49:29.457979044 +0000 UTC m=+0.951488893 container attach 708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 08:49:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:29.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:30 np0005592157 practical_jackson[184322]: {
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:    "0": [
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:        {
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "devices": [
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "/dev/loop3"
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            ],
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "lv_name": "ceph_lv0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "lv_size": "7511998464",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "name": "ceph_lv0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "tags": {
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.cluster_name": "ceph",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.crush_device_class": "",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.encrypted": "0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.osd_id": "0",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.type": "block",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:                "ceph.vdo": "0"
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            },
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "type": "block",
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:            "vg_name": "ceph_vg0"
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:        }
Jan 22 08:49:30 np0005592157 practical_jackson[184322]:    ]
Jan 22 08:49:30 np0005592157 practical_jackson[184322]: }
Jan 22 08:49:30 np0005592157 systemd[1]: libpod-708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91.scope: Deactivated successfully.
Jan 22 08:49:30 np0005592157 podman[184307]: 2026-01-22 13:49:30.1978048 +0000 UTC m=+1.691314609 container died 708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 22 08:49:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:30.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7f070d1461856560533a5635ec9fd418dca382d42772c607ede00cea65c73cdf-merged.mount: Deactivated successfully.
Jan 22 08:49:31 np0005592157 podman[184307]: 2026-01-22 13:49:31.720281096 +0000 UTC m=+3.213790905 container remove 708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:49:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592157 systemd[1]: libpod-conmon-708ea2a9c7a51e66bee318d507a932e553a8c40851cb39c50e7aafa4985eac91.scope: Deactivated successfully.
Jan 22 08:49:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:32.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:32 np0005592157 podman[184490]: 2026-01-22 13:49:32.399666525 +0000 UTC m=+0.039485445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:49:32 np0005592157 podman[184490]: 2026-01-22 13:49:32.837727276 +0000 UTC m=+0.477546166 container create c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:49:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:33 np0005592157 systemd[1]: Started libpod-conmon-c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef.scope.
Jan 22 08:49:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:49:33 np0005592157 podman[184490]: 2026-01-22 13:49:33.293104962 +0000 UTC m=+0.932923832 container init c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:49:33 np0005592157 podman[184506]: 2026-01-22 13:49:33.297553894 +0000 UTC m=+0.589502036 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:49:33 np0005592157 podman[184490]: 2026-01-22 13:49:33.302751598 +0000 UTC m=+0.942570478 container start c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:49:33 np0005592157 podman[184490]: 2026-01-22 13:49:33.309741086 +0000 UTC m=+0.949560026 container attach c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:49:33 np0005592157 hardcore_napier[184527]: 167 167
Jan 22 08:49:33 np0005592157 systemd[1]: libpod-c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef.scope: Deactivated successfully.
Jan 22 08:49:33 np0005592157 podman[184490]: 2026-01-22 13:49:33.314837258 +0000 UTC m=+0.954656128 container died c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:49:33 np0005592157 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 22 08:49:33 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a5c924fa46d6f73cf7b24dbc8897eb860284775f5328b10d8a54589b6e440316-merged.mount: Deactivated successfully.
Jan 22 08:49:33 np0005592157 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Jan 22 08:49:33 np0005592157 podman[184490]: 2026-01-22 13:49:33.632018281 +0000 UTC m=+1.271837151 container remove c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 08:49:33 np0005592157 systemd[1]: libpod-conmon-c468927725113ecb2a47f3b8101e4f943b4f26ba7e5ab19456acda0e678416ef.scope: Deactivated successfully.
Jan 22 08:49:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:33 np0005592157 podman[184579]: 2026-01-22 13:49:33.908378082 +0000 UTC m=+0.078795905 container create 3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 22 08:49:33 np0005592157 systemd[1]: Started libpod-conmon-3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917.scope.
Jan 22 08:49:33 np0005592157 podman[184579]: 2026-01-22 13:49:33.879476332 +0000 UTC m=+0.049894255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:49:33 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:49:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf376201bf2fac9d10bc4a09d8fb4694e2d5e05e48fa3f6bdc2bf0fc0101df9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf376201bf2fac9d10bc4a09d8fb4694e2d5e05e48fa3f6bdc2bf0fc0101df9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf376201bf2fac9d10bc4a09d8fb4694e2d5e05e48fa3f6bdc2bf0fc0101df9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:34 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf376201bf2fac9d10bc4a09d8fb4694e2d5e05e48fa3f6bdc2bf0fc0101df9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:49:34 np0005592157 podman[184579]: 2026-01-22 13:49:34.041154801 +0000 UTC m=+0.211572644 container init 3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:49:34 np0005592157 podman[184579]: 2026-01-22 13:49:34.056180178 +0000 UTC m=+0.226598041 container start 3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:49:34 np0005592157 podman[184579]: 2026-01-22 13:49:34.061225808 +0000 UTC m=+0.231643631 container attach 3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 08:49:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:34 np0005592157 great_pare[184595]: {
Jan 22 08:49:34 np0005592157 great_pare[184595]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:49:34 np0005592157 great_pare[184595]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:49:34 np0005592157 great_pare[184595]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:49:34 np0005592157 great_pare[184595]:        "osd_id": 0,
Jan 22 08:49:34 np0005592157 great_pare[184595]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:49:34 np0005592157 great_pare[184595]:        "type": "bluestore"
Jan 22 08:49:34 np0005592157 great_pare[184595]:    }
Jan 22 08:49:34 np0005592157 great_pare[184595]: }
Jan 22 08:49:35 np0005592157 systemd[1]: libpod-3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917.scope: Deactivated successfully.
Jan 22 08:49:35 np0005592157 podman[184579]: 2026-01-22 13:49:35.007672731 +0000 UTC m=+1.178090554 container died 3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:49:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:35.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:36.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2bf376201bf2fac9d10bc4a09d8fb4694e2d5e05e48fa3f6bdc2bf0fc0101df9-merged.mount: Deactivated successfully.
Jan 22 08:49:36 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:36 np0005592157 podman[184579]: 2026-01-22 13:49:36.829634036 +0000 UTC m=+3.000051859 container remove 3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:49:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:49:36 np0005592157 systemd[1]: libpod-conmon-3ffe54bd13de7ee20a2b087c2675b15da7bf5a82eb9988ef22f2540c1cec0917.scope: Deactivated successfully.
Jan 22 08:49:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:49:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:36 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c7407c1c-5bc9-46d8-9929-4a0f997cb840 does not exist
Jan 22 08:49:36 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 325116c9-ac2c-42b1-8645-63a7c1adc7ef does not exist
Jan 22 08:49:37 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d0a18a71-8351-4624-8c7e-7de9c2bc8c08 does not exist
Jan 22 08:49:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:37.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:39 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:39.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:40 np0005592157 auditd[703]: Audit daemon rotating log files
Jan 22 08:49:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:41.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:43.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:44 np0005592157 systemd[1]: Stopping OpenSSH server daemon...
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Deactivated successfully.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Unit process 177322 (sshd-session) remains running after unit stopped.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Unit process 177693 (sshd-session) remains running after unit stopped.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Unit process 184025 (sshd-session) remains running after unit stopped.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Unit process 184053 (sshd-session) remains running after unit stopped.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Unit process 184960 (sshd-session) remains running after unit stopped.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Unit process 184961 (sshd-session) remains running after unit stopped.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Unit process 185386 (sshd-session) remains running after unit stopped.
Jan 22 08:49:44 np0005592157 systemd[1]: Stopped OpenSSH server daemon.
Jan 22 08:49:44 np0005592157 systemd[1]: sshd.service: Consumed 4.122s CPU time, 42.2M memory peak, read 564.0K from disk, written 20.0K to disk.
Jan 22 08:49:44 np0005592157 systemd[1]: Stopped target sshd-keygen.target.
Jan 22 08:49:44 np0005592157 systemd[1]: Stopping sshd-keygen.target...
Jan 22 08:49:44 np0005592157 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:44 np0005592157 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:44 np0005592157 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:44 np0005592157 systemd[1]: Reached target sshd-keygen.target.
Jan 22 08:49:44 np0005592157 systemd[1]: Starting OpenSSH server daemon...
Jan 22 08:49:44 np0005592157 systemd[1]: Started OpenSSH server daemon.
Jan 22 08:49:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000060s ======
Jan 22 08:49:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000060s
Jan 22 08:49:46 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:49:46 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:49:46 np0005592157 systemd[1]: Reloading.
Jan 22 08:49:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:46 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:46 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:49:46 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:49:47
Jan 22 08:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'vms']
Jan 22 08:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:49:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:49:47.556 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:49:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:49:47.557 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:49:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:49:47.557 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:49:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:47.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:48.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:49 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:49.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:50 np0005592157 podman[189898]: 2026-01-22 13:49:50.34488751 +0000 UTC m=+0.073087815 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 08:49:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:50 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:51 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:51 np0005592157 python3.9[191223]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:51.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:51 np0005592157 systemd[1]: Reloading.
Jan 22 08:49:52 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:52 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:53 np0005592157 python3.9[192350]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:53 np0005592157 systemd[1]: Reloading.
Jan 22 08:49:53 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:53 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:53.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:54 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:54 np0005592157 python3.9[193700]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:49:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:54.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:49:54 np0005592157 systemd[1]: Reloading.
Jan 22 08:49:54 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:54 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:55 np0005592157 python3.9[194564]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:55 np0005592157 systemd[1]: Reloading.
Jan 22 08:49:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:55.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:55 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:55 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:56 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:49:56 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:49:56 np0005592157 systemd[1]: man-db-cache-update.service: Consumed 11.326s CPU time.
Jan 22 08:49:56 np0005592157 systemd[1]: run-r498e510b07e34c3b9b7cca8a1a5bac60.service: Deactivated successfully.
Jan 22 08:49:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:57 np0005592157 python3.9[195165]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:49:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:57.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:49:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:58 np0005592157 systemd[1]: Reloading.
Jan 22 08:49:58 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:58 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:58.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:58 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:59 np0005592157 python3.9[195406]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:59 np0005592157 systemd[1]: Reloading.
Jan 22 08:49:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:59 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:59 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:59 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:49:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:49:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:00.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:00 np0005592157 python3.9[195599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:01 np0005592157 systemd[1]: Reloading.
Jan 22 08:50:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:01 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:50:01 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:50:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:02 np0005592157 python3.9[195790]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:03 np0005592157 podman[195918]: 2026-01-22 13:50:03.600033519 +0000 UTC m=+0.115029186 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:50:03 np0005592157 python3.9[195961]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:03 np0005592157 systemd[1]: Reloading.
Jan 22 08:50:04 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:50:04 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:50:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:05 np0005592157 python3.9[196164]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:50:05 np0005592157 systemd[1]: Reloading.
Jan 22 08:50:05 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:50:05 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:50:05 np0005592157 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 22 08:50:05 np0005592157 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 22 08:50:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:05.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:06 np0005592157 python3.9[196361]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:07 np0005592157 python3.9[196516]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:07.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:08 np0005592157 python3.9[196672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:08 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:09 np0005592157 python3.9[196829]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:10 np0005592157 python3.9[196987]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:10 np0005592157 python3.9[197142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:11 np0005592157 python3.9[197298]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:12 np0005592157 python3.9[197453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 804 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:13 np0005592157 python3.9[197608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:14 np0005592157 python3.9[197765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:14.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:15 np0005592157 python3.9[197920]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:15 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 804 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:16 np0005592157 python3.9[198076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:16.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:50:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:17 np0005592157 python3.9[198233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:17.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:18 np0005592157 python3.9[198439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:18.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 809 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:19 np0005592157 python3.9[198594]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:20 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 809 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:20 np0005592157 python3.9[198747]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:20.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:20 np0005592157 podman[198871]: 2026-01-22 13:50:20.730353691 +0000 UTC m=+0.101322334 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:50:20 np0005592157 python3.9[198914]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:21 np0005592157 python3.9[199071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:21.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:22 np0005592157 python3.9[199223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:23 np0005592157 python3.9[199375]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:23 np0005592157 python3.9[199526]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:50:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:24.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:24 np0005592157 python3.9[199678]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:25 np0005592157 python3.9[199804]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089824.1391246-1646-276131029861030/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000075s ======
Jan 22 08:50:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:25.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 22 08:50:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:26 np0005592157 python3.9[199956]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:27 np0005592157 python3.9[200081]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089825.9359713-1646-195504497229507/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:27 np0005592157 python3.9[200234]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 814 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:28 np0005592157 python3.9[200359]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089827.352463-1646-170580739657144/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:50:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:50:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:30 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 814 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:30.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:30 np0005592157 python3.9[200512]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:31 np0005592157 python3.9[200637]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089828.6088645-1646-51974359461381/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:32 np0005592157 python3.9[200790]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:32 np0005592157 python3.9[200915]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089831.4883711-1646-83791747068065/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 819 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:33 np0005592157 python3.9[201067]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:34 np0005592157 podman[201165]: 2026-01-22 13:50:34.000067217 +0000 UTC m=+0.121836097 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 08:50:34 np0005592157 python3.9[201212]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089832.8949525-1646-129968021683980/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:34.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:50:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:50:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:34 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 819 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:34 np0005592157 python3.9[201371]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:35 np0005592157 python3.9[201494]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089834.3418288-1646-114074130220072/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:36 np0005592157 python3.9[201647]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:50:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:50:36 np0005592157 python3.9[201772]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089835.73425-1646-272763225283614/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:37 np0005592157 python3.9[201925]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 22 08:50:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 824 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:38.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:50:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:50:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:38 np0005592157 python3.9[202257]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 054b6494-7222-4dfc-9edc-8534bf96da86 does not exist
Jan 22 08:50:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5759fda5-4268-40b2-b43a-5c2373909c7d does not exist
Jan 22 08:50:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9cb3d22d-d6fb-45de-aaa1-3687a301f82b does not exist
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:50:39 np0005592157 python3.9[202510]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 824 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:50:39 np0005592157 podman[202551]: 2026-01-22 13:50:39.746106844 +0000 UTC m=+0.046196696 container create 0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:50:39 np0005592157 systemd[1]: Started libpod-conmon-0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf.scope.
Jan 22 08:50:39 np0005592157 podman[202551]: 2026-01-22 13:50:39.725002786 +0000 UTC m=+0.025092668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:50:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:50:39 np0005592157 podman[202551]: 2026-01-22 13:50:39.849375995 +0000 UTC m=+0.149465897 container init 0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:50:39 np0005592157 podman[202551]: 2026-01-22 13:50:39.857801306 +0000 UTC m=+0.157891188 container start 0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:50:39 np0005592157 systemd[1]: libpod-0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf.scope: Deactivated successfully.
Jan 22 08:50:39 np0005592157 pensive_satoshi[202593]: 167 167
Jan 22 08:50:39 np0005592157 conmon[202593]: conmon 0441af67caf7b1e940b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf.scope/container/memory.events
Jan 22 08:50:39 np0005592157 podman[202551]: 2026-01-22 13:50:39.878482513 +0000 UTC m=+0.178572405 container attach 0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:50:39 np0005592157 podman[202551]: 2026-01-22 13:50:39.879086658 +0000 UTC m=+0.179176540 container died 0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Jan 22 08:50:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-527e74c8022314af8c6638644aaedb063801ea83945070512f8b587e11db8ee2-merged.mount: Deactivated successfully.
Jan 22 08:50:39 np0005592157 podman[202551]: 2026-01-22 13:50:39.989325514 +0000 UTC m=+0.289415406 container remove 0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:50:40 np0005592157 systemd[1]: libpod-conmon-0441af67caf7b1e940b3b1e62907a476ccd92809344f68a874b55e0e114f38cf.scope: Deactivated successfully.
Jan 22 08:50:40 np0005592157 podman[202739]: 2026-01-22 13:50:40.19517739 +0000 UTC m=+0.045179631 container create 50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:50:40 np0005592157 systemd[1]: Started libpod-conmon-50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a.scope.
Jan 22 08:50:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:40.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:50:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8febe071c9dc775dd10c3a18b7c85308ace542afd9b4303d9ef9fc43d2347a35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8febe071c9dc775dd10c3a18b7c85308ace542afd9b4303d9ef9fc43d2347a35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:40 np0005592157 podman[202739]: 2026-01-22 13:50:40.176572775 +0000 UTC m=+0.026575036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:50:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8febe071c9dc775dd10c3a18b7c85308ace542afd9b4303d9ef9fc43d2347a35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8febe071c9dc775dd10c3a18b7c85308ace542afd9b4303d9ef9fc43d2347a35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8febe071c9dc775dd10c3a18b7c85308ace542afd9b4303d9ef9fc43d2347a35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:40 np0005592157 podman[202739]: 2026-01-22 13:50:40.307968149 +0000 UTC m=+0.157970400 container init 50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:50:40 np0005592157 podman[202739]: 2026-01-22 13:50:40.31800392 +0000 UTC m=+0.168006151 container start 50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:50:40 np0005592157 podman[202739]: 2026-01-22 13:50:40.32120361 +0000 UTC m=+0.171206081 container attach 50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 08:50:40 np0005592157 python3.9[202750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:50:41 np0005592157 hopeful_mcnulty[202760]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:50:41 np0005592157 hopeful_mcnulty[202760]: --> relative data size: 1.0
Jan 22 08:50:41 np0005592157 hopeful_mcnulty[202760]: --> All data devices are unavailable
Jan 22 08:50:41 np0005592157 systemd[1]: libpod-50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a.scope: Deactivated successfully.
Jan 22 08:50:41 np0005592157 podman[202739]: 2026-01-22 13:50:41.151879735 +0000 UTC m=+1.001882006 container died 50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:50:41 np0005592157 python3.9[202916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8febe071c9dc775dd10c3a18b7c85308ace542afd9b4303d9ef9fc43d2347a35-merged.mount: Deactivated successfully.
Jan 22 08:50:41 np0005592157 podman[202739]: 2026-01-22 13:50:41.270291715 +0000 UTC m=+1.120293956 container remove 50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:50:41 np0005592157 systemd[1]: libpod-conmon-50dd0971ecc8eea3bf284888ad134f4e6b6ad542f1405f9860c737d7e50b470a.scope: Deactivated successfully.
Jan 22 08:50:41 np0005592157 python3.9[203192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:41 np0005592157 podman[203232]: 2026-01-22 13:50:41.936476118 +0000 UTC m=+0.042757400 container create 5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:50:42 np0005592157 podman[203232]: 2026-01-22 13:50:41.915581466 +0000 UTC m=+0.021862768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:50:42 np0005592157 systemd[1]: Started libpod-conmon-5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356.scope.
Jan 22 08:50:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:50:42 np0005592157 podman[203232]: 2026-01-22 13:50:42.070299463 +0000 UTC m=+0.176580735 container init 5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:50:42 np0005592157 podman[203232]: 2026-01-22 13:50:42.081425731 +0000 UTC m=+0.187707033 container start 5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:50:42 np0005592157 sad_heisenberg[203272]: 167 167
Jan 22 08:50:42 np0005592157 systemd[1]: libpod-5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356.scope: Deactivated successfully.
Jan 22 08:50:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:42 np0005592157 podman[203232]: 2026-01-22 13:50:42.100257732 +0000 UTC m=+0.206539014 container attach 5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:50:42 np0005592157 podman[203232]: 2026-01-22 13:50:42.100617141 +0000 UTC m=+0.206898403 container died 5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 08:50:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cafe77424fdd2506ba6642c71cc80d073279e1172e2e426ef1b260f42ae515ed-merged.mount: Deactivated successfully.
Jan 22 08:50:42 np0005592157 podman[203232]: 2026-01-22 13:50:42.1417789 +0000 UTC m=+0.248060202 container remove 5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 08:50:42 np0005592157 systemd[1]: libpod-conmon-5316611b0dfb4b2c41593133d091af8ea0a4a32571f74254f21d07eb01c7f356.scope: Deactivated successfully.
Jan 22 08:50:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:42.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:42 np0005592157 podman[203370]: 2026-01-22 13:50:42.374151469 +0000 UTC m=+0.077086038 container create 36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:50:42 np0005592157 systemd[1]: Started libpod-conmon-36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68.scope.
Jan 22 08:50:42 np0005592157 podman[203370]: 2026-01-22 13:50:42.34460543 +0000 UTC m=+0.047540079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:50:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:50:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51fe92d0d76a574c9b817b60136767f519c7f1aa5fe31c22647744e7e4b9c86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51fe92d0d76a574c9b817b60136767f519c7f1aa5fe31c22647744e7e4b9c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51fe92d0d76a574c9b817b60136767f519c7f1aa5fe31c22647744e7e4b9c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51fe92d0d76a574c9b817b60136767f519c7f1aa5fe31c22647744e7e4b9c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:42 np0005592157 podman[203370]: 2026-01-22 13:50:42.485293737 +0000 UTC m=+0.188228336 container init 36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:50:42 np0005592157 podman[203370]: 2026-01-22 13:50:42.500810055 +0000 UTC m=+0.203744624 container start 36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:50:42 np0005592157 podman[203370]: 2026-01-22 13:50:42.504798295 +0000 UTC m=+0.207732864 container attach 36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 08:50:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:42.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:42 np0005592157 python3.9[203443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 834 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:43 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:43 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 834 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]: {
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:    "0": [
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:        {
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "devices": [
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "/dev/loop3"
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            ],
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "lv_name": "ceph_lv0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "lv_size": "7511998464",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "name": "ceph_lv0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "tags": {
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.cluster_name": "ceph",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.crush_device_class": "",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.encrypted": "0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.osd_id": "0",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.type": "block",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:                "ceph.vdo": "0"
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            },
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "type": "block",
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:            "vg_name": "ceph_vg0"
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:        }
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]:    ]
Jan 22 08:50:43 np0005592157 vibrant_khayyam[203420]: }
Jan 22 08:50:43 np0005592157 systemd[1]: libpod-36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68.scope: Deactivated successfully.
Jan 22 08:50:43 np0005592157 podman[203370]: 2026-01-22 13:50:43.415009819 +0000 UTC m=+1.117944418 container died 36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:50:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e51fe92d0d76a574c9b817b60136767f519c7f1aa5fe31c22647744e7e4b9c86-merged.mount: Deactivated successfully.
Jan 22 08:50:43 np0005592157 python3.9[203598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:43 np0005592157 podman[203370]: 2026-01-22 13:50:43.548580698 +0000 UTC m=+1.251515257 container remove 36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:50:43 np0005592157 systemd[1]: libpod-conmon-36d95bac344357747471c65cf80ffa30d7cdf002f8aca274ee2a27373db20a68.scope: Deactivated successfully.
Jan 22 08:50:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:44 np0005592157 python3.9[203867]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:44 np0005592157 podman[203907]: 2026-01-22 13:50:44.268911665 +0000 UTC m=+0.054763060 container create ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:50:44 np0005592157 systemd[1]: Started libpod-conmon-ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8.scope.
Jan 22 08:50:44 np0005592157 podman[203907]: 2026-01-22 13:50:44.249716275 +0000 UTC m=+0.035567690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:50:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:50:44 np0005592157 podman[203907]: 2026-01-22 13:50:44.372062333 +0000 UTC m=+0.157913748 container init ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:50:44 np0005592157 podman[203907]: 2026-01-22 13:50:44.382473263 +0000 UTC m=+0.168324668 container start ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:50:44 np0005592157 podman[203907]: 2026-01-22 13:50:44.387016287 +0000 UTC m=+0.172867682 container attach ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:50:44 np0005592157 kind_curran[203965]: 167 167
Jan 22 08:50:44 np0005592157 systemd[1]: libpod-ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8.scope: Deactivated successfully.
Jan 22 08:50:44 np0005592157 conmon[203965]: conmon ae6c389cc030a706e850 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8.scope/container/memory.events
Jan 22 08:50:44 np0005592157 podman[203907]: 2026-01-22 13:50:44.392979146 +0000 UTC m=+0.178830571 container died ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:50:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c1f09879ce779f621d8558a408a4187a85bd258fdb03897b782013d0e07d42fc-merged.mount: Deactivated successfully.
Jan 22 08:50:44 np0005592157 podman[203907]: 2026-01-22 13:50:44.43915388 +0000 UTC m=+0.225005295 container remove ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:50:44 np0005592157 systemd[1]: libpod-conmon-ae6c389cc030a706e8500767a2c27e969ec509c4424e5d464709e2d7939853b8.scope: Deactivated successfully.
Jan 22 08:50:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:44 np0005592157 podman[204068]: 2026-01-22 13:50:44.686442162 +0000 UTC m=+0.101880828 container create 415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_galois, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:50:44 np0005592157 podman[204068]: 2026-01-22 13:50:44.620077253 +0000 UTC m=+0.035515939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:50:44 np0005592157 systemd[1]: Started libpod-conmon-415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60.scope.
Jan 22 08:50:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:50:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1bb3cc3ce4aba0579197c6cfb02dfae1ba1a4a2c937513fb5febae95ba2cf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1bb3cc3ce4aba0579197c6cfb02dfae1ba1a4a2c937513fb5febae95ba2cf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1bb3cc3ce4aba0579197c6cfb02dfae1ba1a4a2c937513fb5febae95ba2cf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1bb3cc3ce4aba0579197c6cfb02dfae1ba1a4a2c937513fb5febae95ba2cf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:50:44 np0005592157 podman[204068]: 2026-01-22 13:50:44.77875119 +0000 UTC m=+0.194189876 container init 415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_galois, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:50:44 np0005592157 podman[204068]: 2026-01-22 13:50:44.793021776 +0000 UTC m=+0.208460442 container start 415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 08:50:44 np0005592157 podman[204068]: 2026-01-22 13:50:44.796974115 +0000 UTC m=+0.212412801 container attach 415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:50:44 np0005592157 python3.9[204112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:45 np0005592157 python3.9[204272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:45 np0005592157 elated_galois[204115]: {
Jan 22 08:50:45 np0005592157 elated_galois[204115]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:50:45 np0005592157 elated_galois[204115]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:50:45 np0005592157 elated_galois[204115]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:50:45 np0005592157 elated_galois[204115]:        "osd_id": 0,
Jan 22 08:50:45 np0005592157 elated_galois[204115]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:50:45 np0005592157 elated_galois[204115]:        "type": "bluestore"
Jan 22 08:50:45 np0005592157 elated_galois[204115]:    }
Jan 22 08:50:45 np0005592157 elated_galois[204115]: }
Jan 22 08:50:45 np0005592157 systemd[1]: libpod-415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60.scope: Deactivated successfully.
Jan 22 08:50:45 np0005592157 podman[204068]: 2026-01-22 13:50:45.712643824 +0000 UTC m=+1.128082490 container died 415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 08:50:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ca1bb3cc3ce4aba0579197c6cfb02dfae1ba1a4a2c937513fb5febae95ba2cf1-merged.mount: Deactivated successfully.
Jan 22 08:50:45 np0005592157 podman[204068]: 2026-01-22 13:50:45.832344397 +0000 UTC m=+1.247783073 container remove 415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:50:45 np0005592157 systemd[1]: libpod-conmon-415f1159cdf4ad654770f7da8611f78e1401b92c51cde27d806d408fff3e1b60.scope: Deactivated successfully.
Jan 22 08:50:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:50:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:46 np0005592157 python3.9[204453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:50:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:46.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:47 np0005592157 python3.9[204605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:50:47
Jan 22 08:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr']
Jan 22 08:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:50:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:50:47.557 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:50:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:50:47.559 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:50:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:50:47.560 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:50:47 np0005592157 python3.9[204758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:48 np0005592157 python3.9[204910]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:48.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:50:49 np0005592157 python3.9[205062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:49 np0005592157 python3.9[205186]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089848.7064624-2309-109717288622046/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:50.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:50 np0005592157 python3.9[205338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:51 np0005592157 podman[205433]: 2026-01-22 13:50:51.143085185 +0000 UTC m=+0.077905989 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 08:50:51 np0005592157 python3.9[205478]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089850.1648028-2309-214843160070249/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 839 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:52 np0005592157 python3.9[205634]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:52.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7de2dee8-ff78-456b-a239-3fed4b11babb does not exist
Jan 22 08:50:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f8d596d8-e858-4b89-9d39-4d72149a31c6 does not exist
Jan 22 08:50:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 80a74ddf-258e-4c8a-abb6-6224840101ee does not exist
Jan 22 08:50:52 np0005592157 python3.9[205778]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089851.6955237-2309-29839997029721/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:53 np0005592157 python3.9[205959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 839 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:54 np0005592157 python3.9[206083]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089852.973785-2309-191093544631361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:54.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:54.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:54 np0005592157 python3.9[206235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:55 np0005592157 python3.9[206358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089854.3350177-2309-110246379497518/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:55 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:56 np0005592157 python3.9[206511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:50:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:56.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:50:56 np0005592157 python3.9[206634]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089855.5784318-2309-231209733208537/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:57 np0005592157 python3.9[206786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:50:57 np0005592157 python3.9[206960]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089856.8658774-2309-158487700319672/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:57 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 844 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.103882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858104189, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1593, "num_deletes": 252, "total_data_size": 2413621, "memory_usage": 2448264, "flush_reason": "Manual Compaction"}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858117256, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1454881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15748, "largest_seqno": 17340, "table_properties": {"data_size": 1449261, "index_size": 2567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17004, "raw_average_key_size": 21, "raw_value_size": 1435883, "raw_average_value_size": 1831, "num_data_blocks": 112, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089723, "oldest_key_time": 1769089723, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13570 microseconds, and 6698 cpu microseconds.
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.117529) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1454881 bytes OK
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.117577) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119803) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119855) EVENT_LOG_v1 {"time_micros": 1769089858119847, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119882) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2406502, prev total WAL file size 2406502, number of live WAL files 2.
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1420KB)], [32(9797KB)]
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858121743, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 11487094, "oldest_snapshot_seqno": -1}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5349 keys, 8489984 bytes, temperature: kUnknown
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858199444, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8489984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8455503, "index_size": 20035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 133815, "raw_average_key_size": 25, "raw_value_size": 8359725, "raw_average_value_size": 1562, "num_data_blocks": 828, "num_entries": 5349, "num_filter_entries": 5349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.200223) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8489984 bytes
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.201936) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.3 rd, 108.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(13.7) write-amplify(5.8) OK, records in: 5807, records dropped: 458 output_compression: NoCompression
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.201959) EVENT_LOG_v1 {"time_micros": 1769089858201947, "job": 14, "event": "compaction_finished", "compaction_time_micros": 77997, "compaction_time_cpu_micros": 42169, "output_level": 6, "num_output_files": 1, "total_output_size": 8489984, "num_input_records": 5807, "num_output_records": 5349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858202384, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858204546, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.204726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.204737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.204739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.204742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:50:58.204744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:58.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:50:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:58.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:58 np0005592157 python3.9[207112]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:59 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 844 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:59 np0005592157 python3.9[207235]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089858.1027308-2309-136094298667302/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:59 np0005592157 python3.9[207388]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:00 np0005592157 python3.9[207511]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089859.3229861-2309-83759767272815/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:01 np0005592157 python3.9[207663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:01 np0005592157 python3.9[207787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089860.6591694-2309-262899044900144/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:02.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:02 np0005592157 python3.9[207939]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 854 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:03 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 854 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:03 np0005592157 python3.9[208064]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089862.1501076-2309-161732890375949/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:51:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:04 np0005592157 python3.9[208217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:04 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:04 np0005592157 podman[208241]: 2026-01-22 13:51:04.398079342 +0000 UTC m=+0.121438077 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 08:51:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:04.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:04 np0005592157 python3.9[208366]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089863.5640116-2309-13620450347588/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:05 np0005592157 python3.9[208519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:06 np0005592157 python3.9[208642]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089865.0248084-2309-224742433634013/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:06.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:06.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:06 np0005592157 python3.9[208794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:07 np0005592157 python3.9[208917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089866.36764-2309-213724740204952/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:07 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:08 np0005592157 python3.9[209068]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:08.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 859 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:09 np0005592157 python3.9[209223]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 22 08:51:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:10 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 859 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:10 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:51:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:51:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:12.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:12 np0005592157 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 22 08:51:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:12 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:12 np0005592157 python3.9[209381]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:13 np0005592157 python3.9[209534]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:14.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:14 np0005592157 python3.9[209686]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:14.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:15 np0005592157 python3.9[209838]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:15 np0005592157 python3.9[209991]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:16.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:51:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:16.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:17 np0005592157 python3.9[210143]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:17 np0005592157 python3.9[210296]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 864 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:18.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:18 np0005592157 python3.9[210498]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:18.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:18 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 864 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:18 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:19 np0005592157 python3.9[210650]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:19 np0005592157 python3.9[210803]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:20.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:20.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:20 np0005592157 python3.9[210955]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:21 np0005592157 systemd[1]: Reloading.
Jan 22 08:51:21 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:21 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:21 np0005592157 systemd[1]: Starting libvirt logging daemon socket...
Jan 22 08:51:21 np0005592157 systemd[1]: Listening on libvirt logging daemon socket.
Jan 22 08:51:21 np0005592157 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 22 08:51:21 np0005592157 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 22 08:51:21 np0005592157 systemd[1]: Starting libvirt logging daemon...
Jan 22 08:51:21 np0005592157 podman[210992]: 2026-01-22 13:51:21.390747408 +0000 UTC m=+0.067758958 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 08:51:21 np0005592157 systemd[1]: Started libvirt logging daemon.
Jan 22 08:51:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:22 np0005592157 python3.9[211165]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:22 np0005592157 systemd[1]: Reloading.
Jan 22 08:51:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:22.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:22 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:22 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:22.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:22 np0005592157 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 22 08:51:22 np0005592157 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 22 08:51:22 np0005592157 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 22 08:51:22 np0005592157 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 22 08:51:22 np0005592157 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 22 08:51:22 np0005592157 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 22 08:51:22 np0005592157 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 08:51:22 np0005592157 systemd[1]: Started libvirt nodedev daemon.
Jan 22 08:51:23 np0005592157 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 22 08:51:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:23 np0005592157 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 22 08:51:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:23 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:23 np0005592157 python3.9[211382]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:23 np0005592157 systemd[1]: Reloading.
Jan 22 08:51:23 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:23 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:23 np0005592157 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 22 08:51:23 np0005592157 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 22 08:51:23 np0005592157 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 22 08:51:23 np0005592157 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 22 08:51:23 np0005592157 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 22 08:51:23 np0005592157 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 22 08:51:23 np0005592157 systemd[1]: Starting libvirt proxy daemon...
Jan 22 08:51:23 np0005592157 systemd[1]: Started libvirt proxy daemon.
Jan 22 08:51:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:24.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:24 np0005592157 python3.9[211603]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:24 np0005592157 setroubleshoot[211329]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3c08b556-d7b0-4b8f-95a6-19641d39fa3c
Jan 22 08:51:24 np0005592157 systemd[1]: Reloading.
Jan 22 08:51:24 np0005592157 setroubleshoot[211329]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 22 08:51:24 np0005592157 setroubleshoot[211329]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3c08b556-d7b0-4b8f-95a6-19641d39fa3c
Jan 22 08:51:24 np0005592157 setroubleshoot[211329]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 22 08:51:24 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:24 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:25 np0005592157 systemd[1]: Listening on libvirt locking daemon socket.
Jan 22 08:51:25 np0005592157 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 22 08:51:25 np0005592157 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 08:51:25 np0005592157 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 22 08:51:25 np0005592157 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 22 08:51:25 np0005592157 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 22 08:51:25 np0005592157 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 22 08:51:25 np0005592157 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 22 08:51:25 np0005592157 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 22 08:51:25 np0005592157 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 22 08:51:25 np0005592157 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 08:51:25 np0005592157 systemd[1]: Started libvirt QEMU daemon.
Jan 22 08:51:25 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:26 np0005592157 python3.9[211820]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:26 np0005592157 systemd[1]: Reloading.
Jan 22 08:51:26 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:26 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:26 np0005592157 systemd[1]: Starting libvirt secret daemon socket...
Jan 22 08:51:26 np0005592157 systemd[1]: Listening on libvirt secret daemon socket.
Jan 22 08:51:26 np0005592157 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 22 08:51:26 np0005592157 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 22 08:51:26 np0005592157 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 22 08:51:26 np0005592157 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 22 08:51:26 np0005592157 systemd[1]: Starting libvirt secret daemon...
Jan 22 08:51:26 np0005592157 systemd[1]: Started libvirt secret daemon.
Jan 22 08:51:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:27 np0005592157 python3.9[212031]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:28 np0005592157 python3.9[212184]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:51:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:28 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:28.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:29 np0005592157 python3.9[212336]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:30 np0005592157 python3.9[212491]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:51:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:30.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:31 np0005592157 python3.9[212641]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.502129) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891502270, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 601, "num_deletes": 251, "total_data_size": 615035, "memory_usage": 627568, "flush_reason": "Manual Compaction"}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891508857, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 605960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17341, "largest_seqno": 17941, "table_properties": {"data_size": 602925, "index_size": 943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7970, "raw_average_key_size": 19, "raw_value_size": 596493, "raw_average_value_size": 1469, "num_data_blocks": 42, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089858, "oldest_key_time": 1769089858, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 6821 microseconds, and 3657 cpu microseconds.
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.508916) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 605960 bytes OK
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.508989) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.510614) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.510637) EVENT_LOG_v1 {"time_micros": 1769089891510630, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.510660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 611729, prev total WAL file size 611729, number of live WAL files 2.
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.511578) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(591KB)], [35(8291KB)]
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891511687, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9095944, "oldest_snapshot_seqno": -1}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 5244 keys, 7419019 bytes, temperature: kUnknown
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891583703, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7419019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7386062, "index_size": 18767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 132507, "raw_average_key_size": 25, "raw_value_size": 7292765, "raw_average_value_size": 1390, "num_data_blocks": 771, "num_entries": 5244, "num_filter_entries": 5244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.583990) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7419019 bytes
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.585879) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 102.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.2) OK, records in: 5755, records dropped: 511 output_compression: NoCompression
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.585901) EVENT_LOG_v1 {"time_micros": 1769089891585891, "job": 16, "event": "compaction_finished", "compaction_time_micros": 72086, "compaction_time_cpu_micros": 27480, "output_level": 6, "num_output_files": 1, "total_output_size": 7419019, "num_input_records": 5755, "num_output_records": 5244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891586198, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891588142, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.511378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.588173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.588177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.588179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.588180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:51:31.588182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592157 python3.9[212763]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089890.7526574-3383-184365801912733/.source.xml follow=False _original_basename=secret.xml.j2 checksum=661e779e9ad9ab9796e6f7af12c5e6a2862cccb5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 884 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:32.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:32 np0005592157 python3.9[212915]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 088fe176-0106-5401-803c-2da38b73b76a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:32 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:33 np0005592157 python3.9[213077]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:33 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 884 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:34.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:34 np0005592157 podman[213355]: 2026-01-22 13:51:34.823763508 +0000 UTC m=+0.188359358 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 08:51:34 np0005592157 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 22 08:51:34 np0005592157 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.022s CPU time.
Jan 22 08:51:34 np0005592157 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 22 08:51:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:35 np0005592157 python3.9[213569]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:36.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:36 np0005592157 python3.9[213721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:37 np0005592157 python3.9[213844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089896.1411889-3548-238256492154561/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:38 np0005592157 python3.9[214047]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:38.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 889 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:39 np0005592157 python3.9[214199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:39 np0005592157 python3.9[214278]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 889 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592157 python3.9[214430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:40.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:40.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:40 np0005592157 python3.9[214508]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1ctutzn5 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:41 np0005592157 python3.9[214661]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:42 np0005592157 python3.9[214739]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:51:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:51:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:43 np0005592157 python3.9[214891]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:44 np0005592157 python3[215045]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:51:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:44.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:44 np0005592157 python3.9[215197]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:45 np0005592157 python3.9[215275]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:46 np0005592157 python3.9[215428]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:46.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:51:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:46.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:46 np0005592157 python3.9[215553]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089905.4932382-3815-79325065986080/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:47 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:51:47
Jan 22 08:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', 'images']
Jan 22 08:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:51:47 np0005592157 python3.9[215705]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:51:47.558 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:51:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:51:47.559 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:51:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:51:47.559 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:51:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:48 np0005592157 python3.9[215784]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:48.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:48 np0005592157 python3.9[215936]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:49 np0005592157 python3.9[216014]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:50 np0005592157 python3.9[216169]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:50.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:50 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:50 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:50 np0005592157 python3.9[216294]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089909.506105-3932-10835298268292/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:52 np0005592157 podman[216419]: 2026-01-22 13:51:52.061979828 +0000 UTC m=+0.089383869 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 08:51:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592157 python3.9[216466]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:52.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:53 np0005592157 python3.9[216619]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 904 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:53 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:53 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 904 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:54 np0005592157 python3.9[216955]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:54 np0005592157 podman[216868]: 2026-01-22 13:51:54.387637351 +0000 UTC m=+0.658478789 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:51:54 np0005592157 podman[216868]: 2026-01-22 13:51:54.554351366 +0000 UTC m=+0.825192804 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:51:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:54.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:54 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:55 np0005592157 python3.9[217181]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:55 np0005592157 podman[217300]: 2026-01-22 13:51:55.619102347 +0000 UTC m=+0.164831568 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:51:55 np0005592157 python3.9[217438]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:51:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:56 np0005592157 podman[217300]: 2026-01-22 13:51:56.092548032 +0000 UTC m=+0.638277253 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 08:51:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:51:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:51:56 np0005592157 podman[217522]: 2026-01-22 13:51:56.533201166 +0000 UTC m=+0.071904992 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.k8s.display-name=Keepalived on RHEL 9, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 22 08:51:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:56.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:56 np0005592157 podman[217571]: 2026-01-22 13:51:56.642581035 +0000 UTC m=+0.086132428 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2)
Jan 22 08:51:56 np0005592157 podman[217522]: 2026-01-22 13:51:56.81736843 +0000 UTC m=+0.356072266 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Jan 22 08:51:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:51:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3903 writes, 18K keys, 3903 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3903 writes, 3903 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1750 writes, 8215 keys, 1750 commit groups, 1.0 writes per commit group, ingest: 10.69 MB, 0.02 MB/s#012Interval WAL: 1750 writes, 1750 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.4      0.21              0.09         8    0.026       0      0       0.0       0.0#012  L6      1/0    7.08 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   2.9    128.9    105.0      0.56              0.25         7    0.080     35K   3845       0.0       0.0#012 Sum      1/0    7.08 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.9     93.4    102.6      0.77              0.34        15    0.051     35K   3845       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.2    104.0    101.8      0.61              0.28        12    0.051     31K   3553       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    128.9    105.0      0.56              0.25         7    0.080     35K   3845       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.1      0.21              0.09         7    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.07 MB/s write, 0.07 GB read, 0.06 MB/s read, 0.8 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 4.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.00012 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(223,3.97 MB,1.3068%) FilterBlock(16,109.61 KB,0.0352107%) IndexBlock(16,191.05 KB,0.0613715%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:51:57 np0005592157 python3.9[217659]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:51:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:51:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:51:58 np0005592157 python3.9[217815]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:58.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:51:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:51:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:51:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:58 np0005592157 python3.9[218017]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:51:59 np0005592157 python3.9[218259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089918.2687888-4148-192780466761991/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:59 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:00 np0005592157 python3.9[218424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:00.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2550fe96-948b-4fb2-8867-588b93cf8ec7 does not exist
Jan 22 08:52:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1e7df711-80c3-4ac3-99f2-6c00b9b5f86a does not exist
Jan 22 08:52:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b16ef3a3-0d31-4863-a916-70cffa77e7ff does not exist
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:52:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:52:01 np0005592157 python3.9[218642]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089919.7181299-4193-131472664098845/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:01 np0005592157 podman[218711]: 2026-01-22 13:52:01.279918222 +0000 UTC m=+0.056601709 container create 9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 08:52:01 np0005592157 systemd[1]: Started libpod-conmon-9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525.scope.
Jan 22 08:52:01 np0005592157 podman[218711]: 2026-01-22 13:52:01.255677165 +0000 UTC m=+0.032360682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:52:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:52:01 np0005592157 podman[218711]: 2026-01-22 13:52:01.417835785 +0000 UTC m=+0.194519322 container init 9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:52:01 np0005592157 podman[218711]: 2026-01-22 13:52:01.431152169 +0000 UTC m=+0.207835676 container start 9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:52:01 np0005592157 podman[218711]: 2026-01-22 13:52:01.437680142 +0000 UTC m=+0.214363629 container attach 9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 08:52:01 np0005592157 practical_colden[218772]: 167 167
Jan 22 08:52:01 np0005592157 systemd[1]: libpod-9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525.scope: Deactivated successfully.
Jan 22 08:52:01 np0005592157 podman[218711]: 2026-01-22 13:52:01.441444636 +0000 UTC m=+0.218128123 container died 9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 22 08:52:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0adb66aaee549398b867abf86eb316355f47069a405518baf268a20195b101ee-merged.mount: Deactivated successfully.
Jan 22 08:52:01 np0005592157 podman[218711]: 2026-01-22 13:52:01.5102694 +0000 UTC m=+0.286952897 container remove 9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:52:01 np0005592157 systemd[1]: libpod-conmon-9e8194cfacd3e1816c022dfdfc82fad21d18c53ffc7975efeb8bc07a82ef5525.scope: Deactivated successfully.
Jan 22 08:52:01 np0005592157 podman[218805]: 2026-01-22 13:52:01.722715659 +0000 UTC m=+0.085658316 container create 79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:52:01 np0005592157 podman[218805]: 2026-01-22 13:52:01.684323498 +0000 UTC m=+0.047266175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:52:01 np0005592157 systemd[1]: Started libpod-conmon-79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d.scope.
Jan 22 08:52:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:52:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e212a243105915f54b833a95c7b6fbfa73bf5d9ffc8aa0b4d81cc651f682f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e212a243105915f54b833a95c7b6fbfa73bf5d9ffc8aa0b4d81cc651f682f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e212a243105915f54b833a95c7b6fbfa73bf5d9ffc8aa0b4d81cc651f682f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e212a243105915f54b833a95c7b6fbfa73bf5d9ffc8aa0b4d81cc651f682f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e212a243105915f54b833a95c7b6fbfa73bf5d9ffc8aa0b4d81cc651f682f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:01 np0005592157 podman[218805]: 2026-01-22 13:52:01.939368084 +0000 UTC m=+0.302310721 container init 79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:52:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:01 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:01 np0005592157 podman[218805]: 2026-01-22 13:52:01.946200795 +0000 UTC m=+0.309143412 container start 79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:52:01 np0005592157 podman[218805]: 2026-01-22 13:52:01.949221111 +0000 UTC m=+0.312163738 container attach 79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 08:52:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:02.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:02.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:02 np0005592157 musing_herschel[218821]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:52:02 np0005592157 musing_herschel[218821]: --> relative data size: 1.0
Jan 22 08:52:02 np0005592157 musing_herschel[218821]: --> All data devices are unavailable
Jan 22 08:52:02 np0005592157 systemd[1]: libpod-79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d.scope: Deactivated successfully.
Jan 22 08:52:02 np0005592157 podman[218805]: 2026-01-22 13:52:02.807577664 +0000 UTC m=+1.170520331 container died 79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:52:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b4e212a243105915f54b833a95c7b6fbfa73bf5d9ffc8aa0b4d81cc651f682f2-merged.mount: Deactivated successfully.
Jan 22 08:52:03 np0005592157 podman[218805]: 2026-01-22 13:52:03.09135955 +0000 UTC m=+1.454302217 container remove 79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_herschel, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:52:03 np0005592157 systemd[1]: libpod-conmon-79af0a1d13b638691d056d14b75a4a1b97c63cc773a49ebc7175eb525996467d.scope: Deactivated successfully.
Jan 22 08:52:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:03 np0005592157 python3.9[218923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:52:03 np0005592157 python3.9[219149]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089921.262857-4238-133675845278485/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:03 np0005592157 podman[219191]: 2026-01-22 13:52:03.84375977 +0000 UTC m=+0.036433453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:52:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:04 np0005592157 podman[219191]: 2026-01-22 13:52:04.21241903 +0000 UTC m=+0.405092753 container create e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 22 08:52:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:04.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:04 np0005592157 python3.9[219356]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:04 np0005592157 systemd[1]: Reloading.
Jan 22 08:52:04 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:04 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:05 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:05 np0005592157 systemd[1]: Started libpod-conmon-e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d.scope.
Jan 22 08:52:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:52:05 np0005592157 systemd[1]: Reached target edpm_libvirt.target.
Jan 22 08:52:05 np0005592157 podman[219191]: 2026-01-22 13:52:05.777292845 +0000 UTC m=+1.969966618 container init e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:52:05 np0005592157 podman[219191]: 2026-01-22 13:52:05.791975513 +0000 UTC m=+1.984649236 container start e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:52:05 np0005592157 brave_dubinsky[219395]: 167 167
Jan 22 08:52:05 np0005592157 systemd[1]: libpod-e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d.scope: Deactivated successfully.
Jan 22 08:52:05 np0005592157 podman[219191]: 2026-01-22 13:52:05.938251685 +0000 UTC m=+2.130925458 container attach e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:52:05 np0005592157 podman[219191]: 2026-01-22 13:52:05.940291637 +0000 UTC m=+2.132965340 container died e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:52:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ea2337abdf2773835c364b91d001d83f9acbd41c2eea7e14f0225acc0c94f8dd-merged.mount: Deactivated successfully.
Jan 22 08:52:06 np0005592157 podman[219191]: 2026-01-22 13:52:06.090838186 +0000 UTC m=+2.283511909 container remove e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:52:06 np0005592157 podman[219396]: 2026-01-22 13:52:06.092685732 +0000 UTC m=+0.601505952 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:52:06 np0005592157 systemd[1]: libpod-conmon-e0843aefb9483742b63c2c9d5095752f80dc6a04e6e41c25e8793c672c39cc9d.scope: Deactivated successfully.
Jan 22 08:52:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:06 np0005592157 podman[219597]: 2026-01-22 13:52:06.427551848 +0000 UTC m=+0.179091026 container create fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 08:52:06 np0005592157 podman[219597]: 2026-01-22 13:52:06.390450679 +0000 UTC m=+0.141989927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:52:06 np0005592157 systemd[1]: Started libpod-conmon-fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320.scope.
Jan 22 08:52:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:52:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c59df993fd77933a0910584434f429df20965ac590a88a30319b685b0b847/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c59df993fd77933a0910584434f429df20965ac590a88a30319b685b0b847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c59df993fd77933a0910584434f429df20965ac590a88a30319b685b0b847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c59df993fd77933a0910584434f429df20965ac590a88a30319b685b0b847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:06 np0005592157 podman[219597]: 2026-01-22 13:52:06.561565003 +0000 UTC m=+0.313104191 container init fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:52:06 np0005592157 podman[219597]: 2026-01-22 13:52:06.56901297 +0000 UTC m=+0.320552128 container start fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:52:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:06 np0005592157 podman[219597]: 2026-01-22 13:52:06.612757325 +0000 UTC m=+0.364296483 container attach fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:52:06 np0005592157 python3.9[219600]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:52:06 np0005592157 systemd[1]: Reloading.
Jan 22 08:52:06 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:06 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:07 np0005592157 systemd[1]: Reloading.
Jan 22 08:52:07 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:07 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]: {
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:    "0": [
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:        {
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "devices": [
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "/dev/loop3"
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            ],
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "lv_name": "ceph_lv0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "lv_size": "7511998464",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "name": "ceph_lv0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "tags": {
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.cluster_name": "ceph",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.crush_device_class": "",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.encrypted": "0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.osd_id": "0",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.type": "block",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:                "ceph.vdo": "0"
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            },
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "type": "block",
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:            "vg_name": "ceph_vg0"
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:        }
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]:    ]
Jan 22 08:52:07 np0005592157 fervent_khorana[219613]: }
Jan 22 08:52:07 np0005592157 podman[219597]: 2026-01-22 13:52:07.417557657 +0000 UTC m=+1.169096805 container died fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:52:07 np0005592157 systemd[1]: libpod-fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320.scope: Deactivated successfully.
Jan 22 08:52:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5a4c59df993fd77933a0910584434f429df20965ac590a88a30319b685b0b847-merged.mount: Deactivated successfully.
Jan 22 08:52:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:08 np0005592157 podman[219597]: 2026-01-22 13:52:08.16811462 +0000 UTC m=+1.919653808 container remove fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:52:08 np0005592157 systemd[1]: libpod-conmon-fb07b7687a2fe1df785af03b048d5c9da5c69925327a28dff5bc87e3f0882320.scope: Deactivated successfully.
Jan 22 08:52:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:08 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:08.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:09 np0005592157 podman[219870]: 2026-01-22 13:52:09.019698104 +0000 UTC m=+0.029334176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:52:09 np0005592157 podman[219870]: 2026-01-22 13:52:09.235459237 +0000 UTC m=+0.245095319 container create 8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:52:09 np0005592157 systemd[1]: Started libpod-conmon-8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc.scope.
Jan 22 08:52:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:52:09 np0005592157 podman[219870]: 2026-01-22 13:52:09.435269151 +0000 UTC m=+0.444905273 container init 8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:52:09 np0005592157 podman[219870]: 2026-01-22 13:52:09.449893177 +0000 UTC m=+0.459529229 container start 8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 08:52:09 np0005592157 interesting_greider[219886]: 167 167
Jan 22 08:52:09 np0005592157 systemd[1]: libpod-8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc.scope: Deactivated successfully.
Jan 22 08:52:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:09 np0005592157 podman[219870]: 2026-01-22 13:52:09.609131434 +0000 UTC m=+0.618767596 container attach 8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:52:09 np0005592157 podman[219870]: 2026-01-22 13:52:09.609590826 +0000 UTC m=+0.619226878 container died 8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:52:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e7ecb39824cc68ccef364352b69a26eae3b433c49125b7eb973f8cb38bf6b435-merged.mount: Deactivated successfully.
Jan 22 08:52:09 np0005592157 systemd[1]: session-49.scope: Deactivated successfully.
Jan 22 08:52:09 np0005592157 systemd[1]: session-49.scope: Consumed 3min 46.030s CPU time.
Jan 22 08:52:09 np0005592157 systemd-logind[785]: Session 49 logged out. Waiting for processes to exit.
Jan 22 08:52:09 np0005592157 systemd-logind[785]: Removed session 49.
Jan 22 08:52:09 np0005592157 podman[219870]: 2026-01-22 13:52:09.853621106 +0000 UTC m=+0.863257178 container remove 8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:52:09 np0005592157 systemd[1]: libpod-conmon-8416e45674c7d2ca9259608487aa1741510ba182dade73d089debaf36a3eb2dc.scope: Deactivated successfully.
Jan 22 08:52:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:10 np0005592157 podman[219911]: 2026-01-22 13:52:10.060322921 +0000 UTC m=+0.091355308 container create e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:52:10 np0005592157 podman[219911]: 2026-01-22 13:52:09.99318905 +0000 UTC m=+0.024221467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:52:10 np0005592157 systemd[1]: Started libpod-conmon-e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2.scope.
Jan 22 08:52:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.005000125s ======
Jan 22 08:52:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:10.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000125s
Jan 22 08:52:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:52:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376a439797c1854e4d5a8431698851f54a069713d3b8a7a511997e8e6b56ade4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376a439797c1854e4d5a8431698851f54a069713d3b8a7a511997e8e6b56ade4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376a439797c1854e4d5a8431698851f54a069713d3b8a7a511997e8e6b56ade4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376a439797c1854e4d5a8431698851f54a069713d3b8a7a511997e8e6b56ade4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:52:10 np0005592157 podman[219911]: 2026-01-22 13:52:10.522507474 +0000 UTC m=+0.553539971 container init e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:52:10 np0005592157 podman[219911]: 2026-01-22 13:52:10.537662694 +0000 UTC m=+0.568695091 container start e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 08:52:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:10.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:10 np0005592157 podman[219911]: 2026-01-22 13:52:10.891869383 +0000 UTC m=+0.922901810 container attach e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:52:11 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]: {
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]:        "osd_id": 0,
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]:        "type": "bluestore"
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]:    }
Jan 22 08:52:11 np0005592157 gallant_rubin[219927]: }
Jan 22 08:52:11 np0005592157 systemd[1]: libpod-e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2.scope: Deactivated successfully.
Jan 22 08:52:11 np0005592157 podman[219911]: 2026-01-22 13:52:11.505001055 +0000 UTC m=+1.536033482 container died e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:52:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-376a439797c1854e4d5a8431698851f54a069713d3b8a7a511997e8e6b56ade4-merged.mount: Deactivated successfully.
Jan 22 08:52:11 np0005592157 podman[219911]: 2026-01-22 13:52:11.861910814 +0000 UTC m=+1.892943211 container remove e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:52:11 np0005592157 systemd[1]: libpod-conmon-e61ec1cc536fa9e01bdb5fa65b719b8dc8ef7df8053c2d507f46b42b2774d4b2.scope: Deactivated successfully.
Jan 22 08:52:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:52:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:52:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:12.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:13 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e95931c5-5a2d-41f8-aad2-91e612c9357d does not exist
Jan 22 08:52:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c8df2e59-1686-4a54-aa4c-907d2675b2e2 does not exist
Jan 22 08:52:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2cafc598-27bb-4b27-a6b5-842b08b4a4aa does not exist
Jan 22 08:52:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:14.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:14 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:14.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:15 np0005592157 systemd-logind[785]: New session 50 of user zuul.
Jan 22 08:52:15 np0005592157 systemd[1]: Started Session 50 of User zuul.
Jan 22 08:52:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:15 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:16 np0005592157 python3.9[220168]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:52:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:52:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:16 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:16 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:17 np0005592157 python3.9[220323]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:52:18 np0005592157 network[220340]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:52:18 np0005592157 network[220341]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:52:18 np0005592157 network[220342]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.657054) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938657260, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 710, "num_deletes": 250, "total_data_size": 889944, "memory_usage": 904376, "flush_reason": "Manual Compaction"}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938668646, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 868137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17943, "largest_seqno": 18651, "table_properties": {"data_size": 864527, "index_size": 1390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8225, "raw_average_key_size": 17, "raw_value_size": 856944, "raw_average_value_size": 1862, "num_data_blocks": 61, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089891, "oldest_key_time": 1769089891, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 11622 microseconds, and 6728 cpu microseconds.
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.668758) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 868137 bytes OK
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.668802) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.671112) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.671147) EVENT_LOG_v1 {"time_micros": 1769089938671140, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.671179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 886176, prev total WAL file size 902571, number of live WAL files 2.
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.672035) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(847KB)], [38(7245KB)]
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938672142, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 8287156, "oldest_snapshot_seqno": -1}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5192 keys, 7743971 bytes, temperature: kUnknown
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938753783, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7743971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711007, "index_size": 18902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 133549, "raw_average_key_size": 25, "raw_value_size": 7618238, "raw_average_value_size": 1467, "num_data_blocks": 757, "num_entries": 5192, "num_filter_entries": 5192, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.754156) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7743971 bytes
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.755800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.4 rd, 94.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(18.5) write-amplify(8.9) OK, records in: 5704, records dropped: 512 output_compression: NoCompression
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.755839) EVENT_LOG_v1 {"time_micros": 1769089938755821, "job": 18, "event": "compaction_finished", "compaction_time_micros": 81716, "compaction_time_cpu_micros": 26243, "output_level": 6, "num_output_files": 1, "total_output_size": 7743971, "num_input_records": 5704, "num_output_records": 5192, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938756433, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938759279, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.671869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.759329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.759334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.759337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.759340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:52:18.759343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:19 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:19 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:19 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:52:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:52:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:20.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:21 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:22 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:22 np0005592157 podman[220539]: 2026-01-22 13:52:22.368792821 +0000 UTC m=+0.092746283 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:52:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:22.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:23 np0005592157 python3.9[220685]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:52:23 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:24 np0005592157 python3.9[220770]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:52:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:24.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:25 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:26.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:26.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:26 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:26 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:28 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:28.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:29 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:29 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:30.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:30 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:31 np0005592157 python3.9[220926]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:52:31 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:32 np0005592157 python3.9[221079]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:52:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:32.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:32.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:32 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:33 np0005592157 python3.9[221232]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:52:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:33 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:33 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:33 np0005592157 python3.9[221385]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:52:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:34.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:34 np0005592157 python3.9[221538]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:35 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:35 np0005592157 python3.9[221663]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089954.2328353-245-218338149102731/.source.iscsi _original_basename=.ji91ufk4 follow=False checksum=563ccfb41a9c836842f255de4f606c2ab272f37c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:36 np0005592157 podman[221788]: 2026-01-22 13:52:36.402793359 +0000 UTC m=+0.126793536 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 08:52:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:36.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:36 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:36 np0005592157 python3.9[221836]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:36.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:37 np0005592157 python3.9[221995]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:37 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:38.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:38 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:38.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:38 np0005592157 python3.9[222147]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:39 np0005592157 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 22 08:52:39 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:39 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:40 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:40 np0005592157 python3.9[222354]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:40 np0005592157 systemd[1]: Reloading.
Jan 22 08:52:41 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:41 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:41 np0005592157 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 08:52:41 np0005592157 systemd[1]: Starting Open-iSCSI...
Jan 22 08:52:41 np0005592157 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 08:52:41 np0005592157 systemd[1]: Started Open-iSCSI.
Jan 22 08:52:41 np0005592157 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 22 08:52:41 np0005592157 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 22 08:52:41 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:42.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:42.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:42 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592157 python3.9[222554]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:52:42 np0005592157 network[222571]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:52:42 np0005592157 network[222572]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:52:42 np0005592157 network[222573]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:52:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:43 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:44 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:45 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:46.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:52:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:47 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:52:47
Jan 22 08:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'vms', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Jan 22 08:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:52:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:52:47.560 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:52:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:52:47.561 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:52:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:52:47.561 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:52:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:48 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:48 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:48.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:49 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:50 np0005592157 python3.9[222849]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:52:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:50.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:51 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:52 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:52 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:52 np0005592157 podman[222862]: 2026-01-22 13:52:52.560950006 +0000 UTC m=+0.083770939 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:52:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:52.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:52 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:52:52 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:52:52 np0005592157 systemd[1]: Reloading.
Jan 22 08:52:52 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:52 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:53 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:52:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:53 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:53 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:52:53 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:52:53 np0005592157 systemd[1]: run-raa3d054599e845c7b7ec842364ffaabb.service: Deactivated successfully.
Jan 22 08:52:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:54 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:54 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:54.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:55 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:56.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:56 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:57 np0005592157 python3.9[223190]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 08:52:57 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:52:58 np0005592157 python3.9[223343]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 22 08:52:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:52:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:58.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:52:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:52:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:58 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:58 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:59 np0005592157 python3.9[223499]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:59 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:59 np0005592157 python3.9[223673]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089978.5254083-509-223643909815295/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:00 np0005592157 python3.9[223825]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:01 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:02 np0005592157 python3.9[223978]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:02 np0005592157 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 08:53:02 np0005592157 systemd[1]: Stopped Load Kernel Modules.
Jan 22 08:53:02 np0005592157 systemd[1]: Stopping Load Kernel Modules...
Jan 22 08:53:02 np0005592157 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:53:02 np0005592157 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:53:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:02.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:02 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:02.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:03 np0005592157 python3.9[224134]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:53:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:53:03 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:03 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:04.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:04 np0005592157 python3.9[224288]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:53:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:05 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:05 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:05 np0005592157 python3.9[224440]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:53:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:06 np0005592157 python3.9[224564]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089984.904002-662-88825981762910/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:06 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:06.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:06.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:07 np0005592157 podman[224688]: 2026-01-22 13:53:07.014895249 +0000 UTC m=+0.158806248 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 08:53:07 np0005592157 python3.9[224731]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:07 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:08 np0005592157 python3.9[224894]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:53:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:08.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:53:08 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:08 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:08.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:09 np0005592157 python3.9[225046]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:09 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:09 np0005592157 python3.9[225199]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:10 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:10 np0005592157 python3.9[225351]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:11 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:11 np0005592157 python3.9[225504]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:12 np0005592157 python3.9[225656]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:12.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:12 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:12 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:13 np0005592157 python3.9[225808]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:13 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:13 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:14 np0005592157 python3.9[225984]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:53:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:14 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 08:53:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 08:53:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:15 np0005592157 python3.9[226248]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:16 np0005592157 python3.9[226402]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:16 np0005592157 systemd[1]: Listening on multipathd control socket.
Jan 22 08:53:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 03d0833f-fbeb-4668-af44-8fb3eda84824 does not exist
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f3efb69c-5214-4a40-9dd2-9d6a7180b308 does not exist
Jan 22 08:53:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a535c00a-0b20-40f7-bba6-505b6037ecc5 does not exist
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:53:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:53:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:17 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:53:17 np0005592157 python3.9[226658]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:17 np0005592157 podman[226703]: 2026-01-22 13:53:17.41752812 +0000 UTC m=+0.052813264 container create 952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:53:17 np0005592157 systemd[1]: Started libpod-conmon-952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67.scope.
Jan 22 08:53:17 np0005592157 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 22 08:53:17 np0005592157 podman[226703]: 2026-01-22 13:53:17.386425031 +0000 UTC m=+0.021710205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:53:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:53:17 np0005592157 udevadm[226725]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 22 08:53:17 np0005592157 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 22 08:53:17 np0005592157 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 08:53:17 np0005592157 podman[226703]: 2026-01-22 13:53:17.52658225 +0000 UTC m=+0.161867414 container init 952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:53:17 np0005592157 podman[226703]: 2026-01-22 13:53:17.537034692 +0000 UTC m=+0.172319856 container start 952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:53:17 np0005592157 podman[226703]: 2026-01-22 13:53:17.542193081 +0000 UTC m=+0.177478255 container attach 952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:53:17 np0005592157 sad_booth[226723]: 167 167
Jan 22 08:53:17 np0005592157 podman[226703]: 2026-01-22 13:53:17.545256668 +0000 UTC m=+0.180541842 container died 952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 08:53:17 np0005592157 systemd[1]: libpod-952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67.scope: Deactivated successfully.
Jan 22 08:53:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4ba276ffe6e58c8bfeefcb0679b19d579181142f7eb5de7f548ac11d42c99861-merged.mount: Deactivated successfully.
Jan 22 08:53:17 np0005592157 multipathd[226731]: --------start up--------
Jan 22 08:53:17 np0005592157 multipathd[226731]: read /etc/multipath.conf
Jan 22 08:53:17 np0005592157 multipathd[226731]: path checkers start up
Jan 22 08:53:17 np0005592157 podman[226703]: 2026-01-22 13:53:17.600486541 +0000 UTC m=+0.235771685 container remove 952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:53:17 np0005592157 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 08:53:17 np0005592157 systemd[1]: libpod-conmon-952466bd1052399ac1e304da9e86f0b28a1e328c14e36107467b6d15f2ebdb67.scope: Deactivated successfully.
Jan 22 08:53:17 np0005592157 podman[226778]: 2026-01-22 13:53:17.816282084 +0000 UTC m=+0.077051750 container create b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:53:17 np0005592157 systemd[1]: Started libpod-conmon-b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330.scope.
Jan 22 08:53:17 np0005592157 podman[226778]: 2026-01-22 13:53:17.784617911 +0000 UTC m=+0.045387627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:53:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:53:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381bb5ad40841094479002b2cb25354c13f78298946bb55fd58f43b846e85f62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381bb5ad40841094479002b2cb25354c13f78298946bb55fd58f43b846e85f62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381bb5ad40841094479002b2cb25354c13f78298946bb55fd58f43b846e85f62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381bb5ad40841094479002b2cb25354c13f78298946bb55fd58f43b846e85f62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381bb5ad40841094479002b2cb25354c13f78298946bb55fd58f43b846e85f62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:17 np0005592157 podman[226778]: 2026-01-22 13:53:17.947139711 +0000 UTC m=+0.207909427 container init b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 08:53:17 np0005592157 podman[226778]: 2026-01-22 13:53:17.959417488 +0000 UTC m=+0.220187154 container start b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:53:17 np0005592157 podman[226778]: 2026-01-22 13:53:17.963317566 +0000 UTC m=+0.224087232 container attach b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:53:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:18 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:53:18 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:18.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:18 np0005592157 strange_tesla[226798]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:53:18 np0005592157 strange_tesla[226798]: --> relative data size: 1.0
Jan 22 08:53:18 np0005592157 strange_tesla[226798]: --> All data devices are unavailable
Jan 22 08:53:18 np0005592157 systemd[1]: libpod-b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330.scope: Deactivated successfully.
Jan 22 08:53:18 np0005592157 podman[226778]: 2026-01-22 13:53:18.824836659 +0000 UTC m=+1.085606325 container died b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:53:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-381bb5ad40841094479002b2cb25354c13f78298946bb55fd58f43b846e85f62-merged.mount: Deactivated successfully.
Jan 22 08:53:18 np0005592157 podman[226778]: 2026-01-22 13:53:18.884649026 +0000 UTC m=+1.145418652 container remove b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 08:53:18 np0005592157 systemd[1]: libpod-conmon-b3e02f7c924135d3a27d136b0ca2f7e82a086e30da36031de55f3c13fe7d7330.scope: Deactivated successfully.
Jan 22 08:53:19 np0005592157 python3.9[226949]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 08:53:19 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:19 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:19 np0005592157 podman[227243]: 2026-01-22 13:53:19.584193012 +0000 UTC m=+0.046700900 container create 6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:53:19 np0005592157 systemd[1]: Started libpod-conmon-6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164.scope.
Jan 22 08:53:19 np0005592157 podman[227243]: 2026-01-22 13:53:19.563468973 +0000 UTC m=+0.025976841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:53:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:53:19 np0005592157 podman[227243]: 2026-01-22 13:53:19.687393156 +0000 UTC m=+0.149901014 container init 6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:53:19 np0005592157 podman[227243]: 2026-01-22 13:53:19.69953051 +0000 UTC m=+0.162038378 container start 6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:53:19 np0005592157 podman[227243]: 2026-01-22 13:53:19.703156251 +0000 UTC m=+0.165664119 container attach 6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:53:19 np0005592157 nifty_mestorf[227282]: 167 167
Jan 22 08:53:19 np0005592157 systemd[1]: libpod-6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164.scope: Deactivated successfully.
Jan 22 08:53:19 np0005592157 podman[227243]: 2026-01-22 13:53:19.706531035 +0000 UTC m=+0.169038903 container died 6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:53:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-aed5c91ed9f618ff47b154da27cf2f2b3003dd2b50567f31b90cd26b4e954e26-merged.mount: Deactivated successfully.
Jan 22 08:53:19 np0005592157 podman[227243]: 2026-01-22 13:53:19.746035774 +0000 UTC m=+0.208543652 container remove 6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:53:19 np0005592157 systemd[1]: libpod-conmon-6fd8e5fcb1f16fd763e2b17dce301e08bca03e3320b71038ab9e852b98f41164.scope: Deactivated successfully.
Jan 22 08:53:19 np0005592157 python3.9[227314]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 22 08:53:19 np0005592157 kernel: Key type psk registered
Jan 22 08:53:19 np0005592157 podman[227335]: 2026-01-22 13:53:19.94115738 +0000 UTC m=+0.046998098 container create 67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:53:19 np0005592157 systemd[1]: Started libpod-conmon-67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca.scope.
Jan 22 08:53:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:53:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8723fe0a55d2111bcee04c7838802ac6f237085c33fecfae2df9da9086b776e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:20 np0005592157 podman[227335]: 2026-01-22 13:53:19.922894583 +0000 UTC m=+0.028735301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:53:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8723fe0a55d2111bcee04c7838802ac6f237085c33fecfae2df9da9086b776e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8723fe0a55d2111bcee04c7838802ac6f237085c33fecfae2df9da9086b776e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8723fe0a55d2111bcee04c7838802ac6f237085c33fecfae2df9da9086b776e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:20 np0005592157 podman[227335]: 2026-01-22 13:53:20.036849346 +0000 UTC m=+0.142690114 container init 67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:53:20 np0005592157 podman[227335]: 2026-01-22 13:53:20.046656152 +0000 UTC m=+0.152496860 container start 67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 08:53:20 np0005592157 podman[227335]: 2026-01-22 13:53:20.04978639 +0000 UTC m=+0.155627118 container attach 67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:53:20 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:20.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:20 np0005592157 practical_johnson[227361]: {
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:    "0": [
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:        {
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "devices": [
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "/dev/loop3"
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            ],
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "lv_name": "ceph_lv0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "lv_size": "7511998464",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "name": "ceph_lv0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "tags": {
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.cluster_name": "ceph",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.crush_device_class": "",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.encrypted": "0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.osd_id": "0",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.type": "block",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:                "ceph.vdo": "0"
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            },
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "type": "block",
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:            "vg_name": "ceph_vg0"
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:        }
Jan 22 08:53:20 np0005592157 practical_johnson[227361]:    ]
Jan 22 08:53:20 np0005592157 practical_johnson[227361]: }
Jan 22 08:53:20 np0005592157 python3.9[227517]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:53:20 np0005592157 systemd[1]: libpod-67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca.scope: Deactivated successfully.
Jan 22 08:53:20 np0005592157 podman[227522]: 2026-01-22 13:53:20.877783994 +0000 UTC m=+0.032425853 container died 67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:53:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8723fe0a55d2111bcee04c7838802ac6f237085c33fecfae2df9da9086b776e2-merged.mount: Deactivated successfully.
Jan 22 08:53:20 np0005592157 podman[227522]: 2026-01-22 13:53:20.954096904 +0000 UTC m=+0.108738673 container remove 67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:53:20 np0005592157 systemd[1]: libpod-conmon-67013091651bc3dd4007a4f52891633d45b79f462f3a07d93a78850b0a25f2ca.scope: Deactivated successfully.
Jan 22 08:53:21 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:21 np0005592157 python3.9[227736]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769090000.2045417-1052-142490849816192/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:21 np0005592157 podman[227824]: 2026-01-22 13:53:21.724711221 +0000 UTC m=+0.067834520 container create 8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:53:21 np0005592157 systemd[1]: Started libpod-conmon-8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7.scope.
Jan 22 08:53:21 np0005592157 podman[227824]: 2026-01-22 13:53:21.696073564 +0000 UTC m=+0.039196903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:53:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:53:21 np0005592157 podman[227824]: 2026-01-22 13:53:21.834487909 +0000 UTC m=+0.177611268 container init 8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:53:21 np0005592157 podman[227824]: 2026-01-22 13:53:21.847882895 +0000 UTC m=+0.191006194 container start 8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_matsumoto, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:53:21 np0005592157 podman[227824]: 2026-01-22 13:53:21.851971107 +0000 UTC m=+0.195094416 container attach 8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_matsumoto, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:53:21 np0005592157 goofy_matsumoto[227859]: 167 167
Jan 22 08:53:21 np0005592157 systemd[1]: libpod-8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7.scope: Deactivated successfully.
Jan 22 08:53:21 np0005592157 podman[227824]: 2026-01-22 13:53:21.855365452 +0000 UTC m=+0.198488751 container died 8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_matsumoto, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:53:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e3f814ea768d03e42e12b16719188b595830f19b5958c317b24378e32fbe6b4a-merged.mount: Deactivated successfully.
Jan 22 08:53:21 np0005592157 podman[227824]: 2026-01-22 13:53:21.903943289 +0000 UTC m=+0.247066578 container remove 8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 08:53:21 np0005592157 systemd[1]: libpod-conmon-8575843f8e63d7515a781c620a91ed072765ed2d3f5dda8bc7f29d342a1accd7.scope: Deactivated successfully.
Jan 22 08:53:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:22 np0005592157 podman[227963]: 2026-01-22 13:53:22.098689615 +0000 UTC m=+0.043180402 container create f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:53:22 np0005592157 systemd[1]: Started libpod-conmon-f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1.scope.
Jan 22 08:53:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:53:22 np0005592157 podman[227963]: 2026-01-22 13:53:22.077791882 +0000 UTC m=+0.022282729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:53:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad125658ea7da684dd1d55d783d1435f61a46bf01b92ababfbb9ab12085b2628/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad125658ea7da684dd1d55d783d1435f61a46bf01b92ababfbb9ab12085b2628/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad125658ea7da684dd1d55d783d1435f61a46bf01b92ababfbb9ab12085b2628/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad125658ea7da684dd1d55d783d1435f61a46bf01b92ababfbb9ab12085b2628/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:53:22 np0005592157 podman[227963]: 2026-01-22 13:53:22.208563586 +0000 UTC m=+0.153054393 container init f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_yalow, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:53:22 np0005592157 podman[227963]: 2026-01-22 13:53:22.221036629 +0000 UTC m=+0.165527436 container start f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:53:22 np0005592157 podman[227963]: 2026-01-22 13:53:22.224583737 +0000 UTC m=+0.169074544 container attach f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_yalow, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:53:22 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:22 np0005592157 python3.9[228009]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:22.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:22 np0005592157 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 22 08:53:22 np0005592157 podman[228090]: 2026-01-22 13:53:22.809722995 +0000 UTC m=+0.082741649 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]: {
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]:        "osd_id": 0,
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]:        "type": "bluestore"
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]:    }
Jan 22 08:53:23 np0005592157 romantic_yalow[228007]: }
Jan 22 08:53:23 np0005592157 systemd[1]: libpod-f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1.scope: Deactivated successfully.
Jan 22 08:53:23 np0005592157 podman[227963]: 2026-01-22 13:53:23.119501187 +0000 UTC m=+1.063992054 container died f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 08:53:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ad125658ea7da684dd1d55d783d1435f61a46bf01b92ababfbb9ab12085b2628-merged.mount: Deactivated successfully.
Jan 22 08:53:23 np0005592157 podman[227963]: 2026-01-22 13:53:23.196591904 +0000 UTC m=+1.141082701 container remove f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:53:23 np0005592157 systemd[1]: libpod-conmon-f80a5389a6d24bd31caae611af76899bd96e7606ff65eb3bfae5c8a53eb782e1.scope: Deactivated successfully.
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 28f09a93-96d0-4fc3-a7c7-ac24ff7e78bb does not exist
Jan 22 08:53:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6fefcadd-6cbd-4ea9-9872-b711b466b761 does not exist
Jan 22 08:53:23 np0005592157 python3.9[228193]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev aacfe164-77bc-4b50-9f27-244969288770 does not exist
Jan 22 08:53:23 np0005592157 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 08:53:23 np0005592157 systemd[1]: Stopped Load Kernel Modules.
Jan 22 08:53:23 np0005592157 systemd[1]: Stopping Load Kernel Modules...
Jan 22 08:53:23 np0005592157 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:53:23 np0005592157 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592157 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 08:53:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:24 np0005592157 python3.9[228421]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:53:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:24.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:25 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:26.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:26 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:26.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:26 np0005592157 systemd[1]: Reloading.
Jan 22 08:53:26 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:26 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:27 np0005592157 systemd[1]: Reloading.
Jan 22 08:53:27 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:27 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:27 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:27 np0005592157 systemd-logind[785]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 08:53:27 np0005592157 systemd-logind[785]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 08:53:27 np0005592157 lvm[228538]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:53:27 np0005592157 lvm[228538]: VG ceph_vg0 finished
Jan 22 08:53:27 np0005592157 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:53:27 np0005592157 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:53:27 np0005592157 systemd[1]: Reloading.
Jan 22 08:53:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:28 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:28 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:28 np0005592157 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:53:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:28 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:28 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:29 np0005592157 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:53:29 np0005592157 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:53:29 np0005592157 systemd[1]: man-db-cache-update.service: Consumed 1.719s CPU time.
Jan 22 08:53:29 np0005592157 systemd[1]: run-r9afd7b2351d04e9f939064cbe4211d2a.service: Deactivated successfully.
Jan 22 08:53:29 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:30 np0005592157 python3.9[229885]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:30 np0005592157 systemd[1]: Stopping Open-iSCSI...
Jan 22 08:53:30 np0005592157 iscsid[222394]: iscsid shutting down.
Jan 22 08:53:30 np0005592157 systemd[1]: iscsid.service: Deactivated successfully.
Jan 22 08:53:30 np0005592157 systemd[1]: Stopped Open-iSCSI.
Jan 22 08:53:30 np0005592157 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 08:53:30 np0005592157 systemd[1]: Starting Open-iSCSI...
Jan 22 08:53:30 np0005592157 systemd[1]: Started Open-iSCSI.
Jan 22 08:53:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:30 np0005592157 python3.9[230047]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:30 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:31 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:31 np0005592157 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 22 08:53:31 np0005592157 multipathd[226731]: exit (signal)
Jan 22 08:53:31 np0005592157 multipathd[226731]: --------shut down-------
Jan 22 08:53:31 np0005592157 systemd[1]: multipathd.service: Deactivated successfully.
Jan 22 08:53:31 np0005592157 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 22 08:53:31 np0005592157 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 08:53:31 np0005592157 multipathd[230053]: --------start up--------
Jan 22 08:53:31 np0005592157 multipathd[230053]: read /etc/multipath.conf
Jan 22 08:53:31 np0005592157 multipathd[230053]: path checkers start up
Jan 22 08:53:31 np0005592157 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 08:53:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:32 np0005592157 python3.9[230211]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:53:32 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:53:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:53:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:33 np0005592157 python3.9[230367]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:33 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:33 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:33 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:34 np0005592157 python3.9[230520]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:53:34 np0005592157 systemd[1]: Reloading.
Jan 22 08:53:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:34.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:34 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:34 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:35 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:35 np0005592157 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 08:53:35 np0005592157 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 22 08:53:35 np0005592157 python3.9[230708]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:53:35 np0005592157 network[230725]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:53:35 np0005592157 network[230726]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:53:35 np0005592157 network[230727]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:53:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:36.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:37 np0005592157 podman[230748]: 2026-01-22 13:53:37.262337196 +0000 UTC m=+0.131716872 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 22 08:53:37 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:37 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:38.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:38 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:38 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:39 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:39 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:40.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:53:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:40.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:53:40 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:42 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:42.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:42.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:42 np0005592157 python3.9[231079]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:43 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:43 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:43 np0005592157 python3.9[231233]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:44.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:44.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:44 np0005592157 python3.9[231386]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:45 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:45 np0005592157 python3.9[231540]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:53:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:46.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:46 np0005592157 python3.9[231693]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:47 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:47 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:53:47
Jan 22 08:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'vms']
Jan 22 08:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:53:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:53:47.561 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:53:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:53:47.562 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:53:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:53:47.563 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:53:47 np0005592157 python3.9[231846]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:48 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:48 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:48 np0005592157 python3.9[232000]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:49 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:49 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:49 np0005592157 python3.9[232153]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:50 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:50.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:51 np0005592157 python3.9[232307]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:51 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:51 np0005592157 python3.9[232460]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:52 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 08:53:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:52.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 08:53:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:52 np0005592157 python3.9[232612]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:53 np0005592157 podman[232736]: 2026-01-22 13:53:53.378687057 +0000 UTC m=+0.103144409 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:53:53 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:53 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:53 np0005592157 python3.9[232781]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:54 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:54 np0005592157 python3.9[232937]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:54.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:53:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:54.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:53:55 np0005592157 python3.9[233089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:55 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:55 np0005592157 python3.9[233242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:56.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:56 np0005592157 python3.9[233394]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:57 np0005592157 python3.9[233546]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:53:58 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:58 np0005592157 python3.9[233699]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:53:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:58.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:53:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:53:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:58.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:59 np0005592157 python3.9[233851]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:59 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:59 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592157 python3.9[234004]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:00 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:54:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:54:00 np0005592157 python3.9[234206]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:00.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:01 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:01 np0005592157 python3.9[234358]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:02 np0005592157 python3.9[234513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:02 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:02.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:02 np0005592157 python3.9[234665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:03 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:54:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:54:04 np0005592157 python3.9[234818]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:04 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:04 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:04.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:04 np0005592157 python3.9[234970]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:54:05 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:06 np0005592157 python3.9[235123]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:54:06 np0005592157 systemd[1]: Reloading.
Jan 22 08:54:06 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:54:06 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:54:06 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:06.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:07 np0005592157 python3.9[235310]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:07 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:07 np0005592157 podman[235312]: 2026-01-22 13:54:07.470462139 +0000 UTC m=+0.132285607 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:54:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:08 np0005592157 python3.9[235490]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:08 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:08 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:08.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:08.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:08 np0005592157 python3.9[235643]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:09 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:09 np0005592157 python3.9[235797]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:10 np0005592157 python3.9[235950]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:10 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:11 np0005592157 python3.9[236103]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:11 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:11 np0005592157 python3.9[236257]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:12 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:12.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:12.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:12 np0005592157 python3.9[236410]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.325654) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053326034, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1518, "num_deletes": 256, "total_data_size": 2148505, "memory_usage": 2185408, "flush_reason": "Manual Compaction"}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053351279, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2115001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18652, "largest_seqno": 20169, "table_properties": {"data_size": 2108467, "index_size": 3414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16221, "raw_average_key_size": 20, "raw_value_size": 2094087, "raw_average_value_size": 2620, "num_data_blocks": 150, "num_entries": 799, "num_filter_entries": 799, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089938, "oldest_key_time": 1769089938, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 25633 microseconds, and 11133 cpu microseconds.
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.351396) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2115001 bytes OK
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.351434) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.353513) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.353589) EVENT_LOG_v1 {"time_micros": 1769090053353577, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.353620) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2141763, prev total WAL file size 2141763, number of live WAL files 2.
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.354670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2065KB)], [41(7562KB)]
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053354768, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9858972, "oldest_snapshot_seqno": -1}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5464 keys, 9664129 bytes, temperature: kUnknown
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053429683, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9664129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9627821, "index_size": 21542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 140995, "raw_average_key_size": 25, "raw_value_size": 9528633, "raw_average_value_size": 1743, "num_data_blocks": 864, "num_entries": 5464, "num_filter_entries": 5464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.430054) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9664129 bytes
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.431306) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.5 rd, 128.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 5991, records dropped: 527 output_compression: NoCompression
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.431324) EVENT_LOG_v1 {"time_micros": 1769090053431316, "job": 20, "event": "compaction_finished", "compaction_time_micros": 74996, "compaction_time_cpu_micros": 27680, "output_level": 6, "num_output_files": 1, "total_output_size": 9664129, "num_input_records": 5991, "num_output_records": 5464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053431748, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053432897, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.354533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.433022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.433029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.433033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.433036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:13.433038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:13 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:14 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:14 np0005592157 python3.9[236564]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:15 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:15 np0005592157 python3.9[236717]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:16 np0005592157 python3.9[236869]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:54:16 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:16.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:17 np0005592157 python3.9[237021]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:17 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:17 np0005592157 python3.9[237174]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:18 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:18 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:18.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:18 np0005592157 python3.9[237326]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:18.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:19 np0005592157 python3.9[237478]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:19 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:20 np0005592157 python3.9[237668]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:20 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:20.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:20 np0005592157 python3.9[237833]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:21 np0005592157 python3.9[237985]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:21 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:22.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:22 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:23 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:23 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:23 np0005592157 podman[238036]: 2026-01-22 13:54:23.872844697 +0000 UTC m=+0.061735894 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 08:54:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:54:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:54:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2c1334a1-6c62-4318-b31c-e9918da48ca4 does not exist
Jan 22 08:54:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 06bcb18d-0347-4bf1-a670-c3097efee65f does not exist
Jan 22 08:54:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5bfb64e5-ea67-4b1f-8746-eee365bc3a14 does not exist
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:54:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:26 np0005592157 podman[238421]: 2026-01-22 13:54:26.406557199 +0000 UTC m=+0.062746819 container create 33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 08:54:26 np0005592157 systemd[1]: Started libpod-conmon-33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc.scope.
Jan 22 08:54:26 np0005592157 podman[238421]: 2026-01-22 13:54:26.381984975 +0000 UTC m=+0.038174675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:54:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:54:26 np0005592157 podman[238421]: 2026-01-22 13:54:26.519150453 +0000 UTC m=+0.175340173 container init 33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:54:26 np0005592157 podman[238421]: 2026-01-22 13:54:26.532328242 +0000 UTC m=+0.188517902 container start 33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:54:26 np0005592157 podman[238421]: 2026-01-22 13:54:26.537204124 +0000 UTC m=+0.193393834 container attach 33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elbakyan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:54:26 np0005592157 amazing_elbakyan[238437]: 167 167
Jan 22 08:54:26 np0005592157 systemd[1]: libpod-33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc.scope: Deactivated successfully.
Jan 22 08:54:26 np0005592157 podman[238421]: 2026-01-22 13:54:26.543986513 +0000 UTC m=+0.200176163 container died 33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elbakyan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 08:54:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a82b1b8205b03fdeef327b9e02d0afef6e730b053be1a518e7df74cf7e0276dc-merged.mount: Deactivated successfully.
Jan 22 08:54:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:26 np0005592157 podman[238421]: 2026-01-22 13:54:26.597592023 +0000 UTC m=+0.253781653 container remove 33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elbakyan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:54:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:26.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:26 np0005592157 systemd[1]: libpod-conmon-33ed19b78b83dd6b695a04f2dec8f2f74a15073bfb101765dc5ab9b8b07bdcbc.scope: Deactivated successfully.
Jan 22 08:54:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:26 np0005592157 podman[238460]: 2026-01-22 13:54:26.842552195 +0000 UTC m=+0.061210151 container create f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 22 08:54:26 np0005592157 systemd[1]: Started libpod-conmon-f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963.scope.
Jan 22 08:54:26 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:26 np0005592157 podman[238460]: 2026-01-22 13:54:26.825261313 +0000 UTC m=+0.043919289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:54:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:54:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e89e545f2a8f41e706644079db762c043de77402f4b7fa6fb873a9c932ed44c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e89e545f2a8f41e706644079db762c043de77402f4b7fa6fb873a9c932ed44c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e89e545f2a8f41e706644079db762c043de77402f4b7fa6fb873a9c932ed44c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e89e545f2a8f41e706644079db762c043de77402f4b7fa6fb873a9c932ed44c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e89e545f2a8f41e706644079db762c043de77402f4b7fa6fb873a9c932ed44c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:26 np0005592157 podman[238460]: 2026-01-22 13:54:26.947303453 +0000 UTC m=+0.165961439 container init f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:54:26 np0005592157 podman[238460]: 2026-01-22 13:54:26.961161089 +0000 UTC m=+0.179819075 container start f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:54:26 np0005592157 podman[238460]: 2026-01-22 13:54:26.966350019 +0000 UTC m=+0.185008005 container attach f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:54:27 np0005592157 python3.9[238609]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 22 08:54:27 np0005592157 hopeful_vaughan[238476]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:54:27 np0005592157 hopeful_vaughan[238476]: --> relative data size: 1.0
Jan 22 08:54:27 np0005592157 hopeful_vaughan[238476]: --> All data devices are unavailable
Jan 22 08:54:27 np0005592157 systemd[1]: libpod-f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963.scope: Deactivated successfully.
Jan 22 08:54:27 np0005592157 podman[238460]: 2026-01-22 13:54:27.837436568 +0000 UTC m=+1.056094534 container died f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:54:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8e89e545f2a8f41e706644079db762c043de77402f4b7fa6fb873a9c932ed44c-merged.mount: Deactivated successfully.
Jan 22 08:54:27 np0005592157 podman[238460]: 2026-01-22 13:54:27.902658968 +0000 UTC m=+1.121316964 container remove f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:54:27 np0005592157 systemd[1]: libpod-conmon-f3b721b79e0fd494a37974e92090a7de456cab212ef5a37e8e590595ec7cc963.scope: Deactivated successfully.
Jan 22 08:54:27 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:28.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:28 np0005592157 podman[238923]: 2026-01-22 13:54:28.871180513 +0000 UTC m=+0.043004336 container create 145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 08:54:28 np0005592157 systemd[1]: Started libpod-conmon-145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae.scope.
Jan 22 08:54:28 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:28 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:28 np0005592157 podman[238923]: 2026-01-22 13:54:28.850496986 +0000 UTC m=+0.022320809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:54:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:54:28 np0005592157 podman[238923]: 2026-01-22 13:54:28.961310485 +0000 UTC m=+0.133134328 container init 145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:54:28 np0005592157 podman[238923]: 2026-01-22 13:54:28.969260704 +0000 UTC m=+0.141084517 container start 145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:54:28 np0005592157 podman[238923]: 2026-01-22 13:54:28.972750831 +0000 UTC m=+0.144574644 container attach 145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:54:28 np0005592157 heuristic_wozniak[238941]: 167 167
Jan 22 08:54:28 np0005592157 systemd[1]: libpod-145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae.scope: Deactivated successfully.
Jan 22 08:54:28 np0005592157 podman[238923]: 2026-01-22 13:54:28.976001112 +0000 UTC m=+0.147824925 container died 145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:54:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-733e29da22890014f8336d5d5252c963e57f2e4e3ee15e0c10681bf7c1ebc902-merged.mount: Deactivated successfully.
Jan 22 08:54:29 np0005592157 podman[238923]: 2026-01-22 13:54:29.017827498 +0000 UTC m=+0.189651311 container remove 145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:54:29 np0005592157 python3.9[238925]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:54:29 np0005592157 systemd[1]: libpod-conmon-145d7d341d5acd0f0f42acfab940d8cef49311fc8bfa8ade8c159052d059cbae.scope: Deactivated successfully.
Jan 22 08:54:29 np0005592157 podman[238970]: 2026-01-22 13:54:29.19795941 +0000 UTC m=+0.044969535 container create 389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:54:29 np0005592157 systemd[1]: Started libpod-conmon-389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b.scope.
Jan 22 08:54:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:54:29 np0005592157 podman[238970]: 2026-01-22 13:54:29.181769845 +0000 UTC m=+0.028779980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:54:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4765440e35d6cbddc31f89078f96fcdfbf6a2fb33f4c42f5e8e3f2a913cd9d47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4765440e35d6cbddc31f89078f96fcdfbf6a2fb33f4c42f5e8e3f2a913cd9d47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4765440e35d6cbddc31f89078f96fcdfbf6a2fb33f4c42f5e8e3f2a913cd9d47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4765440e35d6cbddc31f89078f96fcdfbf6a2fb33f4c42f5e8e3f2a913cd9d47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:29 np0005592157 podman[238970]: 2026-01-22 13:54:29.29520386 +0000 UTC m=+0.142214035 container init 389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:54:29 np0005592157 podman[238970]: 2026-01-22 13:54:29.311053486 +0000 UTC m=+0.158063611 container start 389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:54:29 np0005592157 podman[238970]: 2026-01-22 13:54:29.315011935 +0000 UTC m=+0.162022060 container attach 389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:54:29 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]: {
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:    "0": [
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:        {
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "devices": [
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "/dev/loop3"
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            ],
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "lv_name": "ceph_lv0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "lv_size": "7511998464",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "name": "ceph_lv0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "tags": {
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.cluster_name": "ceph",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.crush_device_class": "",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.encrypted": "0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.osd_id": "0",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.type": "block",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:                "ceph.vdo": "0"
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            },
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "type": "block",
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:            "vg_name": "ceph_vg0"
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:        }
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]:    ]
Jan 22 08:54:30 np0005592157 wonderful_tesla[239010]: }
Jan 22 08:54:30 np0005592157 systemd[1]: libpod-389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b.scope: Deactivated successfully.
Jan 22 08:54:30 np0005592157 podman[238970]: 2026-01-22 13:54:30.171053539 +0000 UTC m=+1.018063654 container died 389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:54:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4765440e35d6cbddc31f89078f96fcdfbf6a2fb33f4c42f5e8e3f2a913cd9d47-merged.mount: Deactivated successfully.
Jan 22 08:54:30 np0005592157 podman[238970]: 2026-01-22 13:54:30.255039588 +0000 UTC m=+1.102049723 container remove 389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:54:30 np0005592157 systemd[1]: libpod-conmon-389d7adb57041fc0c6b7dbb68e5328aad66f1df9574107b495e57b3bb4be707b.scope: Deactivated successfully.
Jan 22 08:54:30 np0005592157 python3.9[239145]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:54:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:30.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:30.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:30 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:30 np0005592157 podman[239334]: 2026-01-22 13:54:30.960343773 +0000 UTC m=+0.044822041 container create 28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 08:54:31 np0005592157 systemd[1]: Started libpod-conmon-28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e.scope.
Jan 22 08:54:31 np0005592157 podman[239334]: 2026-01-22 13:54:30.938373694 +0000 UTC m=+0.022851962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:54:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:54:31 np0005592157 podman[239334]: 2026-01-22 13:54:31.055232995 +0000 UTC m=+0.139711253 container init 28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:54:31 np0005592157 podman[239334]: 2026-01-22 13:54:31.062543598 +0000 UTC m=+0.147021856 container start 28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:54:31 np0005592157 podman[239334]: 2026-01-22 13:54:31.06665934 +0000 UTC m=+0.151137668 container attach 28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:54:31 np0005592157 cool_hypatia[239350]: 167 167
Jan 22 08:54:31 np0005592157 systemd[1]: libpod-28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e.scope: Deactivated successfully.
Jan 22 08:54:31 np0005592157 podman[239355]: 2026-01-22 13:54:31.108441915 +0000 UTC m=+0.027918809 container died 28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:54:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9fc2c18a273121346e2fd0622337184d07732decd6fb86c004bf020f0d534ba9-merged.mount: Deactivated successfully.
Jan 22 08:54:31 np0005592157 podman[239355]: 2026-01-22 13:54:31.153195613 +0000 UTC m=+0.072672467 container remove 28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 08:54:31 np0005592157 systemd[1]: libpod-conmon-28e38a3bfd745d7c06454d792e3b350a86364e762e13d02c19440640c183254e.scope: Deactivated successfully.
Jan 22 08:54:31 np0005592157 podman[239377]: 2026-01-22 13:54:31.419084198 +0000 UTC m=+0.074857592 container create 9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_torvalds, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 08:54:31 np0005592157 systemd[1]: Started libpod-conmon-9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a.scope.
Jan 22 08:54:31 np0005592157 podman[239377]: 2026-01-22 13:54:31.391483128 +0000 UTC m=+0.047256602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:54:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:54:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 8435 writes, 33K keys, 8435 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8435 writes, 1742 syncs, 4.84 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 552 writes, 846 keys, 552 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 552 writes, 264 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000161 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000161 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Jan 22 08:54:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:54:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68be55567dff64704f401a48f974c29ddcd91f6f556138b0af9fe7b8d54a4f97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68be55567dff64704f401a48f974c29ddcd91f6f556138b0af9fe7b8d54a4f97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68be55567dff64704f401a48f974c29ddcd91f6f556138b0af9fe7b8d54a4f97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68be55567dff64704f401a48f974c29ddcd91f6f556138b0af9fe7b8d54a4f97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:54:31 np0005592157 podman[239377]: 2026-01-22 13:54:31.546147814 +0000 UTC m=+0.201921278 container init 9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:54:31 np0005592157 podman[239377]: 2026-01-22 13:54:31.561818095 +0000 UTC m=+0.217591519 container start 9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_torvalds, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:54:31 np0005592157 podman[239377]: 2026-01-22 13:54:31.565829386 +0000 UTC m=+0.221602820 container attach 9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:54:31 np0005592157 systemd-logind[785]: New session 51 of user zuul.
Jan 22 08:54:31 np0005592157 systemd[1]: Started Session 51 of User zuul.
Jan 22 08:54:31 np0005592157 systemd[1]: session-51.scope: Deactivated successfully.
Jan 22 08:54:31 np0005592157 systemd-logind[785]: Session 51 logged out. Waiting for processes to exit.
Jan 22 08:54:31 np0005592157 systemd-logind[785]: Removed session 51.
Jan 22 08:54:31 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]: {
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]:        "osd_id": 0,
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]:        "type": "bluestore"
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]:    }
Jan 22 08:54:32 np0005592157 exciting_torvalds[239394]: }
Jan 22 08:54:32 np0005592157 systemd[1]: libpod-9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a.scope: Deactivated successfully.
Jan 22 08:54:32 np0005592157 podman[239377]: 2026-01-22 13:54:32.536889754 +0000 UTC m=+1.192663158 container died 9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_torvalds, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:54:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-68be55567dff64704f401a48f974c29ddcd91f6f556138b0af9fe7b8d54a4f97-merged.mount: Deactivated successfully.
Jan 22 08:54:32 np0005592157 podman[239377]: 2026-01-22 13:54:32.598200426 +0000 UTC m=+1.253973810 container remove 9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_torvalds, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:54:32 np0005592157 systemd[1]: libpod-conmon-9cbec11635512d53a14569789c271c931d103c11777ce3cbbd45060be38d6b4a.scope: Deactivated successfully.
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 08:54:32 np0005592157 python3.9[239565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:32.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:32 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ed6b7d96-9044-4b1b-b3bb-fe786a6b9ec9 does not exist
Jan 22 08:54:32 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1b2be3d9-c950-4e62-a53a-6d18fc5ab6f3 does not exist
Jan 22 08:54:32 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f35af5ca-0eea-4179-8415-fc4931798955 does not exist
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.921582) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072921658, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 515, "num_deletes": 251, "total_data_size": 432720, "memory_usage": 443688, "flush_reason": "Manual Compaction"}
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072927203, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20170, "largest_seqno": 20684, "table_properties": {"data_size": 413181, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7315, "raw_average_key_size": 19, "raw_value_size": 407344, "raw_average_value_size": 1092, "num_data_blocks": 33, "num_entries": 373, "num_filter_entries": 373, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090053, "oldest_key_time": 1769090053, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 5663 microseconds, and 2283 cpu microseconds.
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.927256) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 415944 bytes OK
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.927273) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.929287) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.929305) EVENT_LOG_v1 {"time_micros": 1769090072929300, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.929327) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 429728, prev total WAL file size 429728, number of live WAL files 2.
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.929746) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(406KB)], [44(9437KB)]
Jan 22 08:54:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072929804, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 10080073, "oldest_snapshot_seqno": -1}
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5322 keys, 8372082 bytes, temperature: kUnknown
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073012846, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8372082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8337727, "index_size": 19973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 138848, "raw_average_key_size": 26, "raw_value_size": 8241814, "raw_average_value_size": 1548, "num_data_blocks": 796, "num_entries": 5322, "num_filter_entries": 5322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.013621) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8372082 bytes
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.016143) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.1 rd, 100.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(44.4) write-amplify(20.1) OK, records in: 5837, records dropped: 515 output_compression: NoCompression
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.016166) EVENT_LOG_v1 {"time_micros": 1769090073016156, "job": 22, "event": "compaction_finished", "compaction_time_micros": 83211, "compaction_time_cpu_micros": 39068, "output_level": 6, "num_output_files": 1, "total_output_size": 8372082, "num_input_records": 5837, "num_output_records": 5322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073016738, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073018964, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:32.929637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.019087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.019092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.019094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.019096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-13:54:33.019098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:33 np0005592157 python3.9[239751]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090072.1154509-2659-240716384851792/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:34 np0005592157 python3.9[239902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:34 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:34 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:34 np0005592157 python3.9[239978]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:34.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:35 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:35 np0005592157 python3.9[240128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:36 np0005592157 python3.9[240250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090074.7320635-2659-195238627344827/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:36 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:36 np0005592157 python3.9[240400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:36.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:36.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:37 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:37 np0005592157 python3.9[240521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090076.1980739-2659-102109592578031/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:37 np0005592157 podman[240646]: 2026-01-22 13:54:37.853673906 +0000 UTC m=+0.109417134 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 22 08:54:37 np0005592157 python3.9[240681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:38 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:38 np0005592157 python3.9[240820]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090077.4207888-2659-175862948361670/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:38.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:38.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:39 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:39 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:39 np0005592157 python3.9[240970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:39 np0005592157 python3.9[241092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090078.952173-2659-58755303495445/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:40 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:40 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:40.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:40 np0005592157 python3.9[241294]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:41 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:41 np0005592157 python3.9[241447]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:42 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:42 np0005592157 python3.9[241599]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:54:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:42.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:43 np0005592157 python3.9[241751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:43 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:43 np0005592157 python3.9[241875]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769090082.587551-2980-278243926099539/.source _original_basename=.5xa5uyxm follow=False checksum=897ab40dbf9d8babbf11df8e51265f4e3dd7ed90 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 22 08:54:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 08:54:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 08:54:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:44.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 08:54:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:54:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:44.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:54:44 np0005592157 python3.9[242029]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:54:45 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:45 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:45 np0005592157 python3.9[242182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:46 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:46 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:46 np0005592157 python3.9[242303]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090085.2404723-3058-56929465637306/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:54:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:46.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:47 np0005592157 python3.9[242453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_13:54:47
Jan 22 08:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 08:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 08:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control']
Jan 22 08:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 08:54:47 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:54:47.562 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:54:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:54:47.563 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:54:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 13:54:47.564 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:54:47 np0005592157 python3.9[242575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090086.5383985-3103-129437280320175/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:48 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:48 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:48 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:48 np0005592157 python3.9[242727]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 22 08:54:49 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:50 np0005592157 python3.9[242880]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:54:50 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:50.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:51 np0005592157 python3[243032]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:54:51 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:52.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:52 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:54 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:54 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:54.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:54.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:55 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2586f0 =====
Jan 22 08:54:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:56.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2586f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2586f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:56.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:57 np0005592157 podman[243089]: 2026-01-22 13:54:57.161736692 +0000 UTC m=+2.895504534 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:54:57 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:57 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:54:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:54:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2586f0 =====
Jan 22 08:54:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:58.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2586f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:54:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2586f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:58.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:54:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:00.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:01 np0005592157 podman[243047]: 2026-01-22 13:55:01.846489321 +0000 UTC m=+10.389859269 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:02 np0005592157 podman[243232]: 2026-01-22 13:55:02.018559621 +0000 UTC m=+0.062660537 container create 3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 08:55:02 np0005592157 podman[243232]: 2026-01-22 13:55:01.984306065 +0000 UTC m=+0.028406991 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:02 np0005592157 python3[243032]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 22 08:55:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:02.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:02.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:03 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:03 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 08:55:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 08:55:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:04 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:04.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:04.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:04 np0005592157 python3.9[243423]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:05 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:06 np0005592157 python3.9[243578]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 22 08:55:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:06.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:06.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:06 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:07 np0005592157 python3.9[243730]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:55:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:08 np0005592157 podman[243855]: 2026-01-22 13:55:08.319774617 +0000 UTC m=+0.187576719 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:55:08 np0005592157 python3[243902]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:55:08 np0005592157 podman[243948]: 2026-01-22 13:55:08.78322631 +0000 UTC m=+0.087733194 container create 563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 22 08:55:08 np0005592157 podman[243948]: 2026-01-22 13:55:08.741465576 +0000 UTC m=+0.045972550 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:08 np0005592157 python3[243902]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 22 08:55:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:08.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:08 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:08 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:09 np0005592157 python3.9[244139]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:09 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:10.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:10 np0005592157 python3.9[244293]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:55:11 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:11 np0005592157 python3.9[244445]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769090110.9758081-3391-72440565044330/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:55:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:12 np0005592157 python3.9[244521]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:55:12 np0005592157 systemd[1]: Reloading.
Jan 22 08:55:12 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:55:12 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:55:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:12.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:12.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:13 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:13 np0005592157 python3.9[244632]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:55:13 np0005592157 systemd[1]: Reloading.
Jan 22 08:55:13 np0005592157 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:55:13 np0005592157 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:55:13 np0005592157 systemd[1]: Starting nova_compute container...
Jan 22 08:55:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:55:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:13 np0005592157 podman[244673]: 2026-01-22 13:55:13.816404594 +0000 UTC m=+0.133960799 container init 563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:55:13 np0005592157 podman[244673]: 2026-01-22 13:55:13.826419554 +0000 UTC m=+0.143975739 container start 563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:55:13 np0005592157 podman[244673]: nova_compute
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + sudo -E kolla_set_configs
Jan 22 08:55:13 np0005592157 systemd[1]: Started nova_compute container.
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Validating config file
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying service configuration files
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Deleting /etc/ceph
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Creating directory /etc/ceph
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Writing out command to execute
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:13 np0005592157 nova_compute[244685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:13 np0005592157 nova_compute[244685]: ++ cat /run_command
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + CMD=nova-compute
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + ARGS=
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + sudo kolla_copy_cacerts
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + [[ ! -n '' ]]
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + . kolla_extend_start
Jan 22 08:55:13 np0005592157 nova_compute[244685]: Running command: 'nova-compute'
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + umask 0022
Jan 22 08:55:13 np0005592157 nova_compute[244685]: + exec nova-compute
Jan 22 08:55:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:14 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:14 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:14 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:14.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:15 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:15 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:15 np0005592157 python3.9[244850]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:16 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:16 np0005592157 python3.9[245001]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:16 np0005592157 nova_compute[244685]: 2026-01-22 13:55:16.398 244692 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 08:55:16 np0005592157 nova_compute[244685]: 2026-01-22 13:55:16.399 244692 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 08:55:16 np0005592157 nova_compute[244685]: 2026-01-22 13:55:16.399 244692 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 08:55:16 np0005592157 nova_compute[244685]: 2026-01-22 13:55:16.399 244692 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 22 08:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 08:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 08:55:16 np0005592157 nova_compute[244685]: 2026-01-22 13:55:16.554 244692 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 08:55:16 np0005592157 nova_compute[244685]: 2026-01-22 13:55:16.584 244692 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 08:55:16 np0005592157 nova_compute[244685]: 2026-01-22 13:55:16.585 244692 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 22 08:55:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:55:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:55:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:16.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.134 244692 INFO nova.virt.driver [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 22 08:55:17 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.298 244692 INFO nova.compute.provider_config [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.315 244692 DEBUG oslo_concurrency.lockutils [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.316 244692 DEBUG oslo_concurrency.lockutils [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.316 244692 DEBUG oslo_concurrency.lockutils [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.316 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.316 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.317 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.317 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.317 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.317 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.317 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.317 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.317 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.318 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.318 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.318 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.318 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.318 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.318 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.318 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.319 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.319 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.319 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.319 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.319 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.319 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.319 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.320 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.320 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.320 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.320 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.320 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.320 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.320 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.321 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.321 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.321 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.321 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.321 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.321 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.321 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.322 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.322 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.322 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.322 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.322 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.322 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.323 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.323 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.323 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.323 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.323 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.323 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.323 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.324 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.324 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.324 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.324 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.324 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.324 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.324 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.325 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.325 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.325 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.325 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.325 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.325 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.325 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.326 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.326 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.326 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.326 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.326 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.326 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.326 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.327 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.327 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.327 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.327 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.327 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.327 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.327 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 python3.9[245155]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.328 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.328 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.328 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.328 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.328 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.328 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.328 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.329 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.329 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.329 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.329 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.329 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.330 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.331 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.331 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.331 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.331 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.331 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.331 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.331 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.332 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.332 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.332 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.332 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.332 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.332 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.332 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.333 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.333 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.333 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.333 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.333 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.333 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.333 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.334 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.334 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.334 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.334 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.334 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.334 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.334 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.335 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.335 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.335 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.335 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.335 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.335 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.335 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.336 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.336 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.336 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.336 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.336 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.336 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.336 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.337 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.337 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.337 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.337 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.337 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.337 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.337 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.338 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.338 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.338 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.338 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.338 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.338 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.338 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.339 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.339 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.339 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.339 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.339 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.339 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.339 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.340 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.340 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.340 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.340 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.340 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.340 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.341 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.341 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.341 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.341 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.341 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.341 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.341 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.342 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.342 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.342 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.342 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.342 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.342 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.342 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.343 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.343 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.343 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.343 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.343 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.343 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.343 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.344 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.345 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.345 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.345 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.345 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.345 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.345 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.345 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.346 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.346 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.346 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.346 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.346 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.347 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.347 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.347 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.347 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.347 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.347 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.347 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.348 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.348 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.348 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.348 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.348 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.348 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.348 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.349 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.349 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.349 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.349 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.349 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.349 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.349 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.350 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.351 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.351 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.351 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.351 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.351 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.351 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.351 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.352 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.352 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.352 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.352 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.352 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.352 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.352 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.353 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.353 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.353 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.353 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.353 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.353 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.353 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.354 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.354 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.354 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.354 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.354 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.354 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.355 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.355 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.355 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.355 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.355 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.355 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.355 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.356 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.356 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.356 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.356 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.356 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.356 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.356 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.357 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.357 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.357 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.357 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.357 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.357 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.357 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.358 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.358 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.358 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.358 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.358 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.358 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.358 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.359 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.359 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.359 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.359 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.359 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.359 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.359 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.360 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.360 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.360 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.360 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.360 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.360 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.361 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.361 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.361 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.361 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.361 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.361 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.361 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.362 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.362 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.362 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.362 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.362 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.362 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.362 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.363 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.363 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.363 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.363 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.363 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.363 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.364 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.364 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.364 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.364 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.364 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.364 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.364 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.365 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.365 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.365 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.365 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.365 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.365 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.365 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.366 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.366 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.366 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.366 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.366 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.366 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.366 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.367 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.367 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.367 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.367 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.367 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.367 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.367 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.368 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.368 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.368 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.368 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.368 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.369 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.369 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.369 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.369 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.369 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.369 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.369 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.370 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.370 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.370 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.370 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.370 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.370 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.370 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.371 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.371 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.371 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.371 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.371 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.371 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.371 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.372 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.372 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.372 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.372 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.372 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.372 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.372 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.373 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.373 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.373 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.373 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.373 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.373 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.373 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.374 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.374 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.374 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.374 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.374 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.374 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.374 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.375 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.376 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.376 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.376 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.376 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.376 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.376 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.376 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.377 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.377 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.377 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.377 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.377 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.377 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.377 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.378 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.379 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.379 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.379 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.379 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.379 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.379 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.379 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.380 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.380 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.380 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.380 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.380 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.380 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.380 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.381 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.381 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.381 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.381 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.381 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.381 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.381 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.382 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.382 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.382 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.382 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.382 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.382 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.382 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.383 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.384 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.384 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.384 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.384 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.384 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.384 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.385 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.385 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.385 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.385 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.385 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.385 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.385 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.386 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.386 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.386 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.386 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.386 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.386 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.386 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.387 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.387 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.387 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.387 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.387 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.387 244692 WARNING oslo_config.cfg [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 08:55:17 np0005592157 nova_compute[244685]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 08:55:17 np0005592157 nova_compute[244685]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 08:55:17 np0005592157 nova_compute[244685]: and ``live_migration_inbound_addr`` respectively.
Jan 22 08:55:17 np0005592157 nova_compute[244685]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.388 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.388 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.388 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.388 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.388 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.388 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.389 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.389 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.389 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.389 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.389 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.389 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.390 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.390 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.390 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.390 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.390 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.390 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.390 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.391 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.391 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.391 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.391 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.391 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.391 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.391 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.392 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.392 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.392 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.392 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.392 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.392 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.393 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.393 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.393 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.393 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.393 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.393 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.394 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.394 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.394 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.394 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.394 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.394 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.394 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.395 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.395 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.395 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.395 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.395 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.395 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.395 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.396 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.396 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.396 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.396 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.396 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.396 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.396 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.397 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.397 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.397 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.397 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.397 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.397 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.397 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.398 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.398 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.398 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.398 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.398 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.398 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.398 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.399 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.399 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.399 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.399 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.399 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.399 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.399 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.400 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.400 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.400 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.400 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.400 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.400 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.401 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.401 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.401 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.401 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.401 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.401 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.401 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.402 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.403 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.403 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.403 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.403 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.403 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.403 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.403 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.404 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.404 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.404 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.404 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.404 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.404 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.404 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.405 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.405 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.405 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.405 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.405 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.405 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.405 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.406 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.406 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.406 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.406 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.406 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.406 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.406 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.407 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.407 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.407 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.407 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.407 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.407 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.407 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.408 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.408 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.408 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.408 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.408 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.408 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.409 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.409 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.409 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.409 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.409 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.409 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.409 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.410 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.410 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.410 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.410 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.410 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.410 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.410 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.411 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.411 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.411 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.411 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.411 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.411 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.411 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.412 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.412 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.412 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.412 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.412 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.412 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.412 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.413 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.413 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.413 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.413 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.413 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.413 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.413 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.414 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.414 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.414 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.414 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.414 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.414 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.414 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.415 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.415 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.415 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.415 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.415 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.415 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.415 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.416 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.416 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.416 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.416 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.416 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.416 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.416 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.417 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.417 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.417 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.417 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.417 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.417 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.418 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.419 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.419 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.419 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.419 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.419 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.419 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.419 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.420 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.420 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.420 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.420 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.420 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.420 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.421 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.421 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.421 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.421 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.421 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.421 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.421 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.422 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.422 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.422 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.422 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.422 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.422 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.422 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.423 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.423 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.423 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.423 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.423 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.423 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.423 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.424 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.424 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.424 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.424 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.424 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.424 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.425 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.425 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.425 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.425 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.425 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.425 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.425 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.426 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.426 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.426 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.426 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.426 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.426 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.426 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.427 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.427 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.427 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.427 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.427 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.427 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.427 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.428 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.428 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.428 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.428 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.428 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.428 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.429 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.430 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.430 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.430 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.430 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.430 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.430 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.431 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.431 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.431 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.431 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.431 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.431 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.431 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.432 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.432 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.432 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.432 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.432 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.432 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.432 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.433 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.433 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.433 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.433 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.433 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.433 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.433 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.434 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.435 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.435 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.435 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.435 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.435 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.435 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.435 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.436 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.436 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.436 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.436 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.436 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.436 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.436 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.437 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.438 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.438 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.438 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.438 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.438 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.438 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.438 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.439 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.440 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.440 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.440 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.440 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.440 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.440 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.441 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.441 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.441 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.441 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.441 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.441 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.441 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.442 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.443 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.443 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.443 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.443 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.443 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.443 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.443 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.444 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.444 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.444 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.444 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.444 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.444 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.444 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.445 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.446 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.446 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.446 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.446 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.446 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.446 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.446 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.447 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.448 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.448 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.448 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.448 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.448 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.448 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.448 244692 DEBUG oslo_service.service [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.450 244692 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.482 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.483 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.483 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.483 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 22 08:55:17 np0005592157 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 08:55:17 np0005592157 systemd[1]: Started libvirt QEMU daemon.
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.587 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3a8d5ebdc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.590 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3a8d5ebdc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.591 244692 INFO nova.virt.libvirt.driver [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.605 244692 WARNING nova.virt.libvirt.driver [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 22 08:55:17 np0005592157 nova_compute[244685]: 2026-01-22 13:55:17.605 244692 DEBUG nova.virt.libvirt.volume.mount [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 22 08:55:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:18 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:18 np0005592157 python3.9[245368]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.605 244692 INFO nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 
Jan 22 08:55:18 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <host>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <uuid>f2612c2e-5bb2-49d6-9db0-33d2b0e700a7</uuid>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <arch>x86_64</arch>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <microcode version='16777317'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <signature family='23' model='49' stepping='0'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='x2apic'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='tsc-deadline'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='osxsave'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='hypervisor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='tsc_adjust'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='spec-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='stibp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='arch-capabilities'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='cmp_legacy'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='topoext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='virt-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='lbrv'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='tsc-scale'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='vmcb-clean'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='pause-filter'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='pfthreshold'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='rdctl-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='skip-l1dfl-vmentry'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='mds-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature name='pschange-mc-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <pages unit='KiB' size='4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <pages unit='KiB' size='2048'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <pages unit='KiB' size='1048576'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <power_management>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <suspend_mem/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </power_management>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <iommu support='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <migration_features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <live/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <uri_transports>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <uri_transport>tcp</uri_transport>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <uri_transport>rdma</uri_transport>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </uri_transports>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </migration_features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <topology>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <cells num='1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <cell id='0'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          <memory unit='KiB'>7864312</memory>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          <pages unit='KiB' size='2048'>0</pages>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          <distances>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <sibling id='0' value='10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          </distances>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          <cpus num='8'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:          </cpus>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        </cell>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </cells>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </topology>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <cache>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </cache>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <secmodel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model>selinux</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <doi>0</doi>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </secmodel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <secmodel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model>dac</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <doi>0</doi>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </secmodel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </host>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <guest>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <os_type>hvm</os_type>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <arch name='i686'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <wordsize>32</wordsize>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <domain type='qemu'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <domain type='kvm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </arch>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <pae/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <nonpae/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <apic default='on' toggle='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <cpuselection/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <deviceboot/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <externalSnapshot/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </guest>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <guest>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <os_type>hvm</os_type>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <arch name='x86_64'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <wordsize>64</wordsize>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <domain type='qemu'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <domain type='kvm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </arch>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <apic default='on' toggle='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <cpuselection/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <deviceboot/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <externalSnapshot/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </guest>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 
Jan 22 08:55:18 np0005592157 nova_compute[244685]: </capabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.612 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.647 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 08:55:18 np0005592157 nova_compute[244685]: <domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <arch>i686</arch>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <vcpu max='4096'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <os supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>rom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pflash</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>yes</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='secure'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </loader>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </os>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>memfd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </memoryBacking>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>disk</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>floppy</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>lun</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>fdc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>sata</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </disk>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vnc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </graphics>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <video supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vga</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>none</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>bochs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </video>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='mode'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>requisite</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>optional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pci</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hostdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>random</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </rng>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>path</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>handle</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </filesystem>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emulator</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>external</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>2.0</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </tpm>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </redirdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </channel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </crypto>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>passt</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </interface>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>isa</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </panic>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <console supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>null</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dev</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pipe</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stdio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>udp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tcp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </console>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='features'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vapic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>runtime</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>synic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stimer</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reset</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ipi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>avic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hyperv>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: </domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.656 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 08:55:18 np0005592157 nova_compute[244685]: <domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <arch>i686</arch>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <vcpu max='240'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <os supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>rom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pflash</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>yes</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='secure'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </loader>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </os>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>memfd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </memoryBacking>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>disk</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>floppy</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>lun</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ide</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>fdc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>sata</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </disk>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vnc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </graphics>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <video supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vga</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>none</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>bochs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </video>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='mode'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>requisite</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>optional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pci</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hostdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>random</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </rng>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>path</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>handle</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </filesystem>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emulator</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>external</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>2.0</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </tpm>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </redirdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </channel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </crypto>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>passt</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </interface>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>isa</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </panic>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <console supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>null</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dev</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pipe</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stdio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>udp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tcp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </console>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='features'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vapic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>runtime</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>synic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stimer</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reset</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ipi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>avic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hyperv>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: </domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.729 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.734 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 08:55:18 np0005592157 nova_compute[244685]: <domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <arch>x86_64</arch>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <vcpu max='4096'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <os supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='firmware'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>efi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>rom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pflash</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>yes</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='secure'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>yes</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </loader>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </os>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:18.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>memfd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </memoryBacking>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>disk</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>floppy</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>lun</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>fdc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>sata</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </disk>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vnc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </graphics>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <video supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vga</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>none</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>bochs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </video>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='mode'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>requisite</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>optional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pci</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hostdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>random</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </rng>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>path</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>handle</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </filesystem>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emulator</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>external</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>2.0</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </tpm>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </redirdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </channel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </crypto>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>passt</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </interface>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>isa</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </panic>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <console supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>null</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dev</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pipe</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stdio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>udp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tcp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </console>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='features'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vapic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>runtime</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>synic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stimer</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reset</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ipi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>avic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hyperv>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: </domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.802 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 08:55:18 np0005592157 nova_compute[244685]: <domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <arch>x86_64</arch>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <vcpu max='240'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <os supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>rom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pflash</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>yes</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='secure'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>no</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </loader>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </os>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>on</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>off</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </blockers>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </mode>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <value>memfd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </memoryBacking>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>disk</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>floppy</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>lun</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ide</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>fdc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>sata</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </disk>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vnc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </graphics>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <video supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vga</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>none</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>bochs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </video>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='mode'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>requisite</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>optional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pci</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>scsi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hostdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>random</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>egd</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </rng>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>path</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>handle</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </filesystem>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emulator</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>external</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>2.0</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </tpm>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='bus'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>usb</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </redirdev>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </channel>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>builtin</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </crypto>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>default</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>passt</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </interface>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='model'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>isa</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </panic>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <console supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='type'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>null</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vc</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pty</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dev</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>file</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>pipe</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stdio</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>udp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tcp</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>unix</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>dbus</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </console>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </devices>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <enum name='features'>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vapic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>runtime</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>synic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>stimer</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reset</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>ipi</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>avic</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </enum>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      <defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:      </defaults>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    </hyperv>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  </features>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: </domainCapabilities>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.890 244692 DEBUG nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.891 244692 INFO nova.virt.libvirt.host [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Secure Boot support detected#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.893 244692 INFO nova.virt.libvirt.driver [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.894 244692 INFO nova.virt.libvirt.driver [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.907 244692 DEBUG nova.virt.libvirt.driver [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 08:55:18 np0005592157 nova_compute[244685]:  <model>Nehalem</model>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: </cpu>
Jan 22 08:55:18 np0005592157 nova_compute[244685]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.910 244692 DEBUG nova.virt.libvirt.driver [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.938 244692 INFO nova.virt.node [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Determined node identity 25bab4de-b201-44ab-9630-4373ed73bbb5 from /var/lib/nova/compute_id#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.956 244692 WARNING nova.compute.manager [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Compute nodes ['25bab4de-b201-44ab-9630-4373ed73bbb5'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 22 08:55:18 np0005592157 nova_compute[244685]: 2026-01-22 13:55:18.983 244692 INFO nova.compute.manager [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.017 244692 WARNING nova.compute.manager [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.017 244692 DEBUG oslo_concurrency.lockutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.018 244692 DEBUG oslo_concurrency.lockutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.018 244692 DEBUG oslo_concurrency.lockutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.018 244692 DEBUG nova.compute.resource_tracker [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.019 244692 DEBUG oslo_concurrency.processutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 08:55:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079402314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.436 244692 DEBUG oslo_concurrency.processutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:19 np0005592157 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 08:55:19 np0005592157 systemd[1]: Started libvirt nodedev daemon.
Jan 22 08:55:19 np0005592157 python3.9[245566]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:55:19 np0005592157 systemd[1]: Stopping nova_compute container...
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.745 244692 WARNING nova.virt.libvirt.driver [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.746 244692 DEBUG nova.compute.resource_tracker [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5198MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.746 244692 DEBUG oslo_concurrency.lockutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.746 244692 DEBUG oslo_concurrency.lockutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.793 244692 DEBUG oslo_concurrency.lockutils [None req-18d76ce8-6b3a-4cae-a32e-50401157caca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.793 244692 DEBUG oslo_concurrency.lockutils [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.794 244692 DEBUG oslo_concurrency.lockutils [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:55:19 np0005592157 nova_compute[244685]: 2026-01-22 13:55:19.794 244692 DEBUG oslo_concurrency.lockutils [None req-bd88c2d0-1b19-4cf7-a7c7-58dd0488b880 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:55:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:20 np0005592157 systemd[1]: libpod-563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566.scope: Deactivated successfully.
Jan 22 08:55:20 np0005592157 virtqemud[245202]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 08:55:20 np0005592157 systemd[1]: libpod-563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566.scope: Consumed 4.134s CPU time.
Jan 22 08:55:20 np0005592157 virtqemud[245202]: hostname: compute-0
Jan 22 08:55:20 np0005592157 virtqemud[245202]: End of file while reading data: Input/output error
Jan 22 08:55:20 np0005592157 podman[245596]: 2026-01-22 13:55:20.325599619 +0000 UTC m=+0.573691569 container died 563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 08:55:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566-userdata-shm.mount: Deactivated successfully.
Jan 22 08:55:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f-merged.mount: Deactivated successfully.
Jan 22 08:55:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:20.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:21 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:21 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:21 np0005592157 podman[245596]: 2026-01-22 13:55:21.732258023 +0000 UTC m=+1.980349963 container cleanup 563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 08:55:21 np0005592157 podman[245596]: nova_compute
Jan 22 08:55:21 np0005592157 podman[245679]: nova_compute
Jan 22 08:55:21 np0005592157 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 22 08:55:21 np0005592157 systemd[1]: Stopped nova_compute container.
Jan 22 08:55:21 np0005592157 systemd[1]: Starting nova_compute container...
Jan 22 08:55:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:55:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd6248770acd8902eeb28257da25d35bedb144e23bf3d93b9706b21a2c2e00f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:21 np0005592157 podman[245692]: 2026-01-22 13:55:21.94143464 +0000 UTC m=+0.112254446 container init 563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 08:55:21 np0005592157 podman[245692]: 2026-01-22 13:55:21.953654606 +0000 UTC m=+0.124474392 container start 563efb0ad375ad23bb2f178769f6c3c16a99c286e79aa12fabd07e9892aae566 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 08:55:21 np0005592157 podman[245692]: nova_compute
Jan 22 08:55:21 np0005592157 nova_compute[245707]: + sudo -E kolla_set_configs
Jan 22 08:55:21 np0005592157 systemd[1]: Started nova_compute container.
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Validating config file
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying service configuration files
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /etc/ceph
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Creating directory /etc/ceph
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Writing out command to execute
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592157 nova_compute[245707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592157 nova_compute[245707]: ++ cat /run_command
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + CMD=nova-compute
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + ARGS=
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + sudo kolla_copy_cacerts
Jan 22 08:55:22 np0005592157 nova_compute[245707]: Running command: 'nova-compute'
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + [[ ! -n '' ]]
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + . kolla_extend_start
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + umask 0022
Jan 22 08:55:22 np0005592157 nova_compute[245707]: + exec nova-compute
Jan 22 08:55:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:22.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:22.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.062 245711 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.063 245711 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.063 245711 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.063 245711 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 22 08:55:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.214 245711 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.234 245711 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.235 245711 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 22 08:55:24 np0005592157 python3.9[245874]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 08:55:24 np0005592157 systemd[1]: Started libpod-conmon-3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56.scope.
Jan 22 08:55:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 08:55:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f9afd508909250b4b8558d9489ed9c3f007fae6cab62f5a2bacaa07700c699/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f9afd508909250b4b8558d9489ed9c3f007fae6cab62f5a2bacaa07700c699/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f9afd508909250b4b8558d9489ed9c3f007fae6cab62f5a2bacaa07700c699/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:24 np0005592157 podman[245901]: 2026-01-22 13:55:24.675122179 +0000 UTC m=+0.150722508 container init 3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init)
Jan 22 08:55:24 np0005592157 podman[245901]: 2026-01-22 13:55:24.684276727 +0000 UTC m=+0.159877016 container start 3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:55:24 np0005592157 python3.9[245874]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Applying nova statedir ownership
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 22 08:55:24 np0005592157 nova_compute_init[245923]: INFO:nova_statedir:Nova statedir ownership complete
Jan 22 08:55:24 np0005592157 systemd[1]: libpod-3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56.scope: Deactivated successfully.
Jan 22 08:55:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:24.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 08:55:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 08:55:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:24.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 08:55:24 np0005592157 podman[245936]: 2026-01-22 13:55:24.831453446 +0000 UTC m=+0.031333585 container died 3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.841 245711 INFO nova.virt.driver [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 22 08:55:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56-userdata-shm.mount: Deactivated successfully.
Jan 22 08:55:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-75f9afd508909250b4b8558d9489ed9c3f007fae6cab62f5a2bacaa07700c699-merged.mount: Deactivated successfully.
Jan 22 08:55:24 np0005592157 podman[245936]: 2026-01-22 13:55:24.879100486 +0000 UTC m=+0.078980655 container cleanup 3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute_init)
Jan 22 08:55:24 np0005592157 systemd[1]: libpod-conmon-3683fcddd174c1d8eb047ef0272e524a36e8b58095f6583bfc16c3a31a6cce56.scope: Deactivated successfully.
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.943 245711 INFO nova.compute.provider_config [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.957 245711 DEBUG oslo_concurrency.lockutils [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.957 245711 DEBUG oslo_concurrency.lockutils [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.957 245711 DEBUG oslo_concurrency.lockutils [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.958 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.958 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.958 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.958 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.958 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.958 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.958 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.959 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.959 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.959 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.959 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.959 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.959 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.959 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.960 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.960 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.960 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.960 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.960 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.960 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.961 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.961 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.961 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.961 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.961 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.962 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.962 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.962 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.962 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.962 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.963 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.963 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.963 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.963 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.963 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.964 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.964 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.964 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.964 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.964 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.965 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.965 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.965 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.965 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.965 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.965 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.966 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.966 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.966 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.966 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.966 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.966 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.966 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.967 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.967 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.967 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.967 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.967 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.967 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.967 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.968 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.968 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.968 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.968 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.968 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.968 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.968 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.969 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.969 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.969 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.969 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.969 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.969 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.969 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.970 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.970 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.970 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.970 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.970 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.970 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.971 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.971 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.971 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.971 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.971 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.971 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.971 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.972 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.972 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.972 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.972 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.972 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.972 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.973 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.973 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.973 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.973 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.973 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.973 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.973 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.974 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.974 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.974 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.974 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.974 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.974 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.974 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.975 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.975 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.975 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.975 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.975 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.975 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.975 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.976 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.976 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.976 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.976 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.976 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.977 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.977 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.977 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.977 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.977 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.977 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.977 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.978 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.978 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.978 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.978 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.978 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.978 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.978 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.979 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.979 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.979 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.979 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.979 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.979 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.979 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.980 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.980 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.980 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.980 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.980 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.980 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.980 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.981 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.981 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.981 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.981 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.981 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.981 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.981 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.982 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.982 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.982 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.982 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.982 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.982 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.983 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.983 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.983 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.983 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.983 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.983 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.983 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.984 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.984 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.984 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.984 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.984 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.984 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.984 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.985 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.985 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.985 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.985 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.985 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.985 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.985 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.986 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.986 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.986 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.986 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.986 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.986 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.986 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.987 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.987 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.987 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.987 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.987 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.987 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.987 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.988 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.988 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.988 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.988 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.988 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.988 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.989 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.990 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.990 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.990 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.990 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.990 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.990 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.991 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.992 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.992 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.992 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.992 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.992 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.992 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.992 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.993 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.993 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.993 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.993 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.993 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.993 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.993 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.994 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.994 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.994 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.994 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.994 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.994 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.995 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.995 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.995 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.995 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.995 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.995 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.995 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.996 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.996 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.996 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.996 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.996 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.996 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.996 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.997 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.997 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.997 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.997 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.997 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.997 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.998 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.999 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.999 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.999 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.999 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.999 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:24 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.999 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:24.999 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.000 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.000 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.000 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.000 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.000 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.000 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.000 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.001 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.002 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.002 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.002 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.002 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.002 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.002 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.002 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.003 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.003 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.003 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.003 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.003 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.003 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.003 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.004 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.004 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.004 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.004 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.004 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.004 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.004 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.005 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.005 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.005 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.005 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.005 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.006 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.007 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.007 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.007 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.007 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.007 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.007 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.007 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.008 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.008 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.008 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.008 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.008 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.008 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.008 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.009 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.009 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.009 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.009 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.009 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.009 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.009 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.010 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.010 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.010 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.010 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.010 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.010 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.010 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.011 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.011 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.011 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.011 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.011 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.011 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.012 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.012 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.012 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.012 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.012 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.012 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.012 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.013 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.014 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.014 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.014 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.014 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.014 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.014 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.014 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.015 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.015 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.015 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.015 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.015 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.015 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.015 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.016 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.016 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.016 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.016 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.016 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.016 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.016 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.017 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.018 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.018 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.018 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.018 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.018 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.018 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.019 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.019 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.019 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.019 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.019 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.019 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.019 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.020 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.020 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.020 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.020 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.020 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.020 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.020 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.021 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.021 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.021 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.021 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.021 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.021 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.021 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.022 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.022 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.022 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.022 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.022 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.022 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.023 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.023 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.023 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.023 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.023 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.023 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.024 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.024 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.024 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.024 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.024 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.025 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.025 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.025 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.025 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.025 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.025 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.026 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.026 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.026 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.026 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.026 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.026 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.026 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.027 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.027 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.027 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.027 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.027 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.027 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.027 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.028 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.028 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.028 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.028 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.028 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.028 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.028 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.029 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.029 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.029 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.029 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.029 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.029 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.029 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.030 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.030 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.030 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.030 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.030 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.030 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.030 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.031 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.031 245711 WARNING oslo_config.cfg [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 08:55:25 np0005592157 nova_compute[245707]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 08:55:25 np0005592157 nova_compute[245707]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 08:55:25 np0005592157 nova_compute[245707]: and ``live_migration_inbound_addr`` respectively.
Jan 22 08:55:25 np0005592157 nova_compute[245707]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.031 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.031 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.031 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.032 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.032 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.032 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.032 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.032 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.032 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.033 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.033 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.033 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.033 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.033 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.033 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.033 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.034 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.034 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.034 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.034 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.034 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.035 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.035 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.035 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.035 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.035 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.035 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.035 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.036 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.036 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.036 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.036 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.036 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.036 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.036 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.037 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.037 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.037 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.037 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.037 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.037 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.037 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.038 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.038 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.038 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.038 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.038 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.038 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.039 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.039 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.039 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.039 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.039 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.039 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.039 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.040 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.040 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.040 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.040 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.040 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.040 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.041 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.041 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.041 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.041 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.041 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.041 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.042 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.042 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.042 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.042 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.042 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.043 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.043 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.043 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.043 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.043 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.043 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.043 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.044 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.044 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.044 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.044 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.044 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.044 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.045 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.045 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.045 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.045 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.045 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.045 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.045 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.046 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.046 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.046 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.046 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.046 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.046 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.046 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.047 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.047 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.047 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.047 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.047 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.047 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.048 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.048 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.048 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.048 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.048 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.048 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.048 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.049 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.049 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.049 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.049 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.049 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.049 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.049 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.050 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.050 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.050 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.050 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.050 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.050 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.050 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.051 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.051 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.051 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.051 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.051 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.051 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.051 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.052 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.052 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.052 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.052 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.052 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.052 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.053 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.053 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.053 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.053 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.053 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.053 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.054 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.054 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.054 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.054 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.054 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.054 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.054 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.055 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.055 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.055 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.055 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.055 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.055 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.056 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.056 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.056 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.056 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.056 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.056 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.056 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.057 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.057 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.057 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.057 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.057 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.057 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.057 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.058 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.058 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.058 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.058 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.058 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.058 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.058 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.059 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.059 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.059 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.059 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.059 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.059 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.060 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.060 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.060 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.060 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.060 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.060 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.060 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.061 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.061 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.061 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.061 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.061 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.061 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.062 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.062 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.062 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.062 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.062 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.062 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.063 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.063 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.063 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.063 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.063 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.063 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.064 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.064 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.064 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.064 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.064 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.064 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.065 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.065 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.065 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.065 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.065 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.065 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.066 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.066 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.066 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.066 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.066 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.066 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.066 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.067 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.067 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.067 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.067 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.067 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.067 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.067 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.068 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.068 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.068 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.068 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.068 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.068 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.068 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.069 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.069 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.069 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.069 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.069 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.069 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.070 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.070 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.070 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.070 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.070 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.070 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.070 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.071 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.071 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.071 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.071 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.071 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.071 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.071 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.072 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.072 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.072 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.072 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.072 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.072 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.073 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.073 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.073 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.073 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.073 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.073 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.073 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.074 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.074 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.074 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.074 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.074 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.075 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.075 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.075 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.075 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.075 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.075 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.075 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.076 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.076 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.076 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.076 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.076 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.076 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.077 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.077 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.077 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.077 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.077 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.077 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.077 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.078 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.078 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.078 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.078 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.078 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.078 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.078 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.079 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.079 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.079 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.079 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.079 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.079 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.079 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.080 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.080 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.080 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.080 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.080 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.081 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.081 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.081 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.081 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.081 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.081 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.081 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.082 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.082 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.082 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.082 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.082 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.082 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.083 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.083 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.083 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.083 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.083 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.083 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.083 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.084 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.084 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.084 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.084 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.084 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.084 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.084 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.085 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.085 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.085 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.085 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.085 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.085 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.085 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.086 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.086 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.086 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.086 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.086 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.086 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.086 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.087 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.088 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.088 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.088 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.088 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.088 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.088 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.088 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.089 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.089 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.089 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.089 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.089 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.089 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.089 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.090 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.090 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.090 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.090 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.090 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.090 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.090 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.091 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.091 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.091 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.091 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.091 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.091 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.091 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.092 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.092 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.092 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.092 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.092 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.092 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.092 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.093 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.093 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.093 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.093 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.093 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.093 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.093 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.094 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.094 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.094 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.094 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.094 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.094 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.094 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.095 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.095 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.095 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.095 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.095 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.095 245711 DEBUG oslo_service.service [None req-316b1053-3fed-438b-a64b-54f1e2d75575 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.096 245711 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.143 245711 INFO nova.virt.node [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Determined node identity 25bab4de-b201-44ab-9630-4373ed73bbb5 from /var/lib/nova/compute_id#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.143 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.144 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.144 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.145 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.157 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fcba92a5a90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.159 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fcba92a5a90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.160 245711 INFO nova.virt.libvirt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 22 08:55:25 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.168 245711 INFO nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <host>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <uuid>f2612c2e-5bb2-49d6-9db0-33d2b0e700a7</uuid>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <arch>x86_64</arch>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <microcode version='16777317'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <signature family='23' model='49' stepping='0'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='x2apic'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='tsc-deadline'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='osxsave'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='hypervisor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='tsc_adjust'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='spec-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='stibp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='arch-capabilities'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='cmp_legacy'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='topoext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='virt-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='lbrv'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='tsc-scale'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='vmcb-clean'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='pause-filter'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='pfthreshold'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='rdctl-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='skip-l1dfl-vmentry'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='mds-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature name='pschange-mc-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <pages unit='KiB' size='4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <pages unit='KiB' size='2048'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <pages unit='KiB' size='1048576'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <power_management>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <suspend_mem/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </power_management>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <iommu support='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <migration_features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <live/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <uri_transports>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <uri_transport>tcp</uri_transport>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <uri_transport>rdma</uri_transport>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </uri_transports>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </migration_features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <topology>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <cells num='1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <cell id='0'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          <memory unit='KiB'>7864312</memory>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          <pages unit='KiB' size='2048'>0</pages>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          <distances>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <sibling id='0' value='10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          </distances>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          <cpus num='8'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:          </cpus>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        </cell>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </cells>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </topology>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <cache>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </cache>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <secmodel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model>selinux</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <doi>0</doi>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </secmodel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <secmodel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model>dac</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <doi>0</doi>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </secmodel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </host>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <guest>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <os_type>hvm</os_type>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <arch name='i686'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <wordsize>32</wordsize>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <domain type='qemu'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <domain type='kvm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </arch>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <pae/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <nonpae/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <apic default='on' toggle='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <cpuselection/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <deviceboot/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <externalSnapshot/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </guest>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <guest>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <os_type>hvm</os_type>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <arch name='x86_64'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <wordsize>64</wordsize>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <domain type='qemu'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <domain type='kvm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </arch>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <apic default='on' toggle='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <cpuselection/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <deviceboot/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <externalSnapshot/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </guest>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 
Jan 22 08:55:25 np0005592157 nova_compute[245707]: </capabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.175 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.181 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 08:55:25 np0005592157 nova_compute[245707]: <domainCapabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <arch>i686</arch>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <vcpu max='240'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <os supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <enum name='firmware'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>rom</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pflash</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>yes</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='secure'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </loader>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </os>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='athlon'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='athlon-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='core2duo'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='core2duo-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='coreduo'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='coreduo-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='n270'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='n270-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='phenom'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='phenom-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <memoryBacking supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <enum name='sourceType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>file</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>anonymous</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>memfd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </memoryBacking>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <devices>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <disk supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='diskDevice'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>disk</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>cdrom</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>floppy</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>lun</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='bus'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ide</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>fdc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>scsi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>sata</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </disk>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <graphics supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vnc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>egl-headless</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dbus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </graphics>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <video supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='modelType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vga</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>cirrus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>none</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>bochs</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ramfb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </video>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <hostdev supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='mode'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>subsystem</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='startupPolicy'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>default</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>mandatory</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>requisite</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>optional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='subsysType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pci</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>scsi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='capsType'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='pciBackend'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </hostdev>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <rng supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>random</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>egd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>builtin</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </rng>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <filesystem supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='driverType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>path</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>handle</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtiofs</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </filesystem>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <tpm supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tpm-tis</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tpm-crb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>emulator</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>external</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendVersion'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>2.0</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </tpm>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <redirdev supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='bus'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </redirdev>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <channel supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pty</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>unix</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </channel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <crypto supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>qemu</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>builtin</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </crypto>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <interface supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>default</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>passt</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </interface>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <panic supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>isa</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>hyperv</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </panic>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <console supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>null</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pty</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dev</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>file</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pipe</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>stdio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>udp</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tcp</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>unix</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>qemu-vdagent</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dbus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </console>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </devices>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <gic supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <genid supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <backup supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <async-teardown supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <s390-pv supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <ps2 supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <tdx supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <sev supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <sgx supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <hyperv supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='features'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>relaxed</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vapic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>spinlocks</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vpindex</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>runtime</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>synic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>stimer</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>reset</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vendor_id</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>frequencies</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>reenlightenment</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tlbflush</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ipi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>avic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>emsr_bitmap</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>xmm_input</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <defaults>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </defaults>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </hyperv>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <launchSecurity supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: </domainCapabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.195 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 08:55:25 np0005592157 nova_compute[245707]: <domainCapabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <arch>i686</arch>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <vcpu max='4096'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <os supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <enum name='firmware'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>rom</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pflash</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>yes</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='secure'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </loader>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </os>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='athlon'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='athlon-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='core2duo'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='core2duo-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='coreduo'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='coreduo-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='n270'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='n270-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='phenom'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='phenom-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <memoryBacking supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <enum name='sourceType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>file</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>anonymous</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>memfd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </memoryBacking>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <devices>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <disk supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='diskDevice'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>disk</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>cdrom</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>floppy</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>lun</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='bus'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>fdc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>scsi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>sata</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </disk>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <graphics supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vnc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>egl-headless</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dbus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </graphics>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <video supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='modelType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vga</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>cirrus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>none</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>bochs</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ramfb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </video>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <hostdev supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='mode'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>subsystem</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='startupPolicy'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>default</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>mandatory</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>requisite</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>optional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='subsysType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pci</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>scsi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='capsType'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='pciBackend'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </hostdev>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <rng supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>random</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>egd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>builtin</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </rng>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <filesystem supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='driverType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>path</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>handle</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtiofs</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </filesystem>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <tpm supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tpm-tis</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tpm-crb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>emulator</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>external</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendVersion'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>2.0</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </tpm>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <redirdev supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='bus'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </redirdev>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <channel supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pty</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>unix</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </channel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <crypto supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>qemu</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>builtin</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </crypto>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <interface supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>default</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>passt</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </interface>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <panic supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>isa</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>hyperv</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </panic>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <console supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>null</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pty</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dev</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>file</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pipe</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>stdio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>udp</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tcp</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>unix</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>qemu-vdagent</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dbus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </console>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </devices>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <gic supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <genid supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <backup supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <async-teardown supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <s390-pv supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <ps2 supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <tdx supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <sev supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <sgx supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <hyperv supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='features'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>relaxed</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vapic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>spinlocks</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vpindex</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>runtime</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>synic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>stimer</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>reset</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vendor_id</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>frequencies</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>reenlightenment</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tlbflush</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ipi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>avic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>emsr_bitmap</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>xmm_input</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <defaults>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </defaults>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </hyperv>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <launchSecurity supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: </domainCapabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.262 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.263 245711 DEBUG nova.virt.libvirt.volume.mount [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.267 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 08:55:25 np0005592157 nova_compute[245707]: <domainCapabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <arch>x86_64</arch>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <vcpu max='240'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <os supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <enum name='firmware'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>rom</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pflash</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>yes</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='secure'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </loader>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </os>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='athlon'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='athlon-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='core2duo'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='core2duo-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='coreduo'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='coreduo-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='n270'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='n270-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='phenom'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='phenom-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <memoryBacking supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <enum name='sourceType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>file</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>anonymous</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>memfd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </memoryBacking>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <devices>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <disk supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='diskDevice'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>disk</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>cdrom</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>floppy</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>lun</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='bus'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ide</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>fdc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>scsi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>sata</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </disk>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <graphics supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vnc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>egl-headless</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dbus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </graphics>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <video supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='modelType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vga</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>cirrus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>none</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>bochs</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ramfb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </video>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <hostdev supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='mode'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>subsystem</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='startupPolicy'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>default</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>mandatory</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>requisite</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>optional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='subsysType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pci</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>scsi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='capsType'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='pciBackend'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </hostdev>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <rng supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>random</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>egd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>builtin</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </rng>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <filesystem supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='driverType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>path</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>handle</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>virtiofs</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </filesystem>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <tpm supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tpm-tis</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tpm-crb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>emulator</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>external</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendVersion'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>2.0</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </tpm>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <redirdev supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='bus'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>usb</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </redirdev>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <channel supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pty</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>unix</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </channel>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <crypto supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>qemu</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>builtin</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </crypto>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <interface supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='backendType'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>default</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>passt</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </interface>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <panic supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='model'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>isa</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>hyperv</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </panic>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <console supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>null</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vc</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pty</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dev</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>file</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pipe</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>stdio</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>udp</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tcp</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>unix</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>qemu-vdagent</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>dbus</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </console>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </devices>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <gic supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <genid supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <backup supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <async-teardown supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <s390-pv supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <ps2 supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <tdx supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <sev supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <sgx supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <hyperv supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='features'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>relaxed</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vapic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>spinlocks</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vpindex</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>runtime</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>synic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>stimer</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>reset</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>vendor_id</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>frequencies</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>reenlightenment</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>tlbflush</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>ipi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>avic</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>emsr_bitmap</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>xmm_input</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <defaults>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </defaults>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </hyperv>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <launchSecurity supported='no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </features>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: </domainCapabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:25 np0005592157 nova_compute[245707]: 2026-01-22 13:55:25.345 245711 DEBUG nova.virt.libvirt.host [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 08:55:25 np0005592157 nova_compute[245707]: <domainCapabilities>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <arch>x86_64</arch>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <vcpu max='4096'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <os supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <enum name='firmware'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>efi</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='type'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>rom</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>pflash</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>yes</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='secure'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>yes</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>no</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </loader>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  </os>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:  <cpu>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>on</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <value>off</value>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </enum>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    </mode>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      </blockers>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592157 nova_compute[245707]:        <feature name='tbm'/>
Jan 22 09:02:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:32 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:32 np0005592157 rsyslogd[1005]: imjournal: 5741 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.269 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.270 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.270 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.271 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.271 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.294 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.295 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.295 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.295 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.296 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:02:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:02:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545890113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.773 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.948 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.950 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.950 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:02:32 np0005592157 nova_compute[245707]: 2026-01-22 14:02:32.950 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.039 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.040 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.040 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.041 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.101 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:02:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:33 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:02:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840054656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.570 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.578 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.597 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.599 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 09:02:33 np0005592157 nova_compute[245707]: 2026-01-22 14:02:33.600 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:02:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:34 np0005592157 podman[253912]: 2026-01-22 14:02:34.41075969 +0000 UTC m=+0.133898322 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:02:34 np0005592157 podman[253912]: 2026-01-22 14:02:34.519825212 +0000 UTC m=+0.242963844 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 09:02:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:02:34 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:34 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:34 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:35 np0005592157 podman[254061]: 2026-01-22 14:02:35.373131787 +0000 UTC m=+0.154957826 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:02:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:35.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:35 np0005592157 podman[254082]: 2026-01-22 14:02:35.458254654 +0000 UTC m=+0.065630394 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:02:35 np0005592157 podman[254061]: 2026-01-22 14:02:35.630375055 +0000 UTC m=+0.412201074 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:02:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:35.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:36 np0005592157 podman[254129]: 2026-01-22 14:02:36.048091245 +0000 UTC m=+0.104327606 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, version=2.2.4, distribution-scope=public, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64)
Jan 22 09:02:36 np0005592157 podman[254150]: 2026-01-22 14:02:36.136125404 +0000 UTC m=+0.061742496 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Jan 22 09:02:36 np0005592157 podman[254129]: 2026-01-22 14:02:36.156341407 +0000 UTC m=+0.212577748 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, vendor=Red Hat, Inc., release=1793, architecture=x86_64, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 09:02:36 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:02:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:02:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:02:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:38 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e6a84753-f2a5-4209-8c55-1f909c65d00b does not exist
Jan 22 09:02:38 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8b39e948-d758-472c-8ff8-c74fd47c65f5 does not exist
Jan 22 09:02:38 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 43cb6e16-61a9-4b5e-accf-918803d7f38c does not exist
Jan 22 09:02:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:02:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:02:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:02:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:02:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:02:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:02:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:38 np0005592157 podman[254436]: 2026-01-22 14:02:38.862728022 +0000 UTC m=+0.079596131 container create 28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leavitt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 09:02:38 np0005592157 podman[254436]: 2026-01-22 14:02:38.80999697 +0000 UTC m=+0.026865099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:02:38 np0005592157 systemd[1]: Started libpod-conmon-28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148.scope.
Jan 22 09:02:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:02:39 np0005592157 podman[254436]: 2026-01-22 14:02:39.010764254 +0000 UTC m=+0.227632383 container init 28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leavitt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:02:39 np0005592157 podman[254436]: 2026-01-22 14:02:39.019224634 +0000 UTC m=+0.236092743 container start 28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leavitt, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:02:39 np0005592157 confident_leavitt[254452]: 167 167
Jan 22 09:02:39 np0005592157 systemd[1]: libpod-28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148.scope: Deactivated successfully.
Jan 22 09:02:39 np0005592157 podman[254436]: 2026-01-22 14:02:39.0568579 +0000 UTC m=+0.273726029 container attach 28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leavitt, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:02:39 np0005592157 podman[254436]: 2026-01-22 14:02:39.058153892 +0000 UTC m=+0.275022011 container died 28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:02:39 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:39 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:02:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:02:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-58240efa3766fcc09f21a5974564370edab135c6ef4da0db9d32d94ea224623d-merged.mount: Deactivated successfully.
Jan 22 09:02:39 np0005592157 podman[254436]: 2026-01-22 14:02:39.386721434 +0000 UTC m=+0.603589543 container remove 28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:02:39 np0005592157 systemd[1]: libpod-conmon-28175fbef5f07ae409ac2e629e4541fc2255d83c2f4c7932b9e2e7fe9765a148.scope: Deactivated successfully.
Jan 22 09:02:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:02:39 np0005592157 podman[254477]: 2026-01-22 14:02:39.587898058 +0000 UTC m=+0.068798282 container create 509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:02:39 np0005592157 podman[254477]: 2026-01-22 14:02:39.544728814 +0000 UTC m=+0.025629058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:02:39 np0005592157 systemd[1]: Started libpod-conmon-509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c.scope.
Jan 22 09:02:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:02:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06670efe18777e03fad7efec41c399ea25e3ac333bbb64d43af29ad4c77eaa7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06670efe18777e03fad7efec41c399ea25e3ac333bbb64d43af29ad4c77eaa7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06670efe18777e03fad7efec41c399ea25e3ac333bbb64d43af29ad4c77eaa7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06670efe18777e03fad7efec41c399ea25e3ac333bbb64d43af29ad4c77eaa7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06670efe18777e03fad7efec41c399ea25e3ac333bbb64d43af29ad4c77eaa7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:39 np0005592157 podman[254477]: 2026-01-22 14:02:39.746581965 +0000 UTC m=+0.227482219 container init 509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:02:39 np0005592157 podman[254477]: 2026-01-22 14:02:39.75601345 +0000 UTC m=+0.236913674 container start 509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:02:39 np0005592157 podman[254477]: 2026-01-22 14:02:39.809296335 +0000 UTC m=+0.290196569 container attach 509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:02:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:02:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:02:40 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:40 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:40 np0005592157 priceless_spence[254493]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:02:40 np0005592157 priceless_spence[254493]: --> relative data size: 1.0
Jan 22 09:02:40 np0005592157 priceless_spence[254493]: --> All data devices are unavailable
Jan 22 09:02:40 np0005592157 systemd[1]: libpod-509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c.scope: Deactivated successfully.
Jan 22 09:02:40 np0005592157 podman[254477]: 2026-01-22 14:02:40.656285962 +0000 UTC m=+1.137186186 container died 509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:02:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b06670efe18777e03fad7efec41c399ea25e3ac333bbb64d43af29ad4c77eaa7-merged.mount: Deactivated successfully.
Jan 22 09:02:40 np0005592157 podman[254477]: 2026-01-22 14:02:40.894192349 +0000 UTC m=+1.375092563 container remove 509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:02:40 np0005592157 systemd[1]: libpod-conmon-509753e7f9e51cd3a68e3ae1dc3bc8c2e669f156a96a469d61a695b1fa86046c.scope: Deactivated successfully.
Jan 22 09:02:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:41 np0005592157 podman[254664]: 2026-01-22 14:02:41.609863139 +0000 UTC m=+0.070567757 container create f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:02:41 np0005592157 podman[254664]: 2026-01-22 14:02:41.564560382 +0000 UTC m=+0.025265020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:02:41 np0005592157 systemd[1]: Started libpod-conmon-f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19.scope.
Jan 22 09:02:41 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:02:41 np0005592157 podman[254664]: 2026-01-22 14:02:41.76953743 +0000 UTC m=+0.230242068 container init f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:02:41 np0005592157 podman[254664]: 2026-01-22 14:02:41.778571525 +0000 UTC m=+0.239276143 container start f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:02:41 np0005592157 interesting_elion[254680]: 167 167
Jan 22 09:02:41 np0005592157 systemd[1]: libpod-f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19.scope: Deactivated successfully.
Jan 22 09:02:41 np0005592157 podman[254664]: 2026-01-22 14:02:41.796825279 +0000 UTC m=+0.257529917 container attach f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:02:41 np0005592157 podman[254664]: 2026-01-22 14:02:41.797973958 +0000 UTC m=+0.258678576 container died f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:02:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-93f63b7d6d817adfb6ada1eef785f646b6966c4321034adb2a39f6c4eff96cff-merged.mount: Deactivated successfully.
Jan 22 09:02:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:41.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:41 np0005592157 podman[254664]: 2026-01-22 14:02:41.998561026 +0000 UTC m=+0.459265644 container remove f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:02:42 np0005592157 systemd[1]: libpod-conmon-f5f87e7da7696f9c15cc7b4f7ba2b2de9d47ed42c50c2849c88e1bfc99e5cc19.scope: Deactivated successfully.
Jan 22 09:02:42 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:42 np0005592157 podman[254708]: 2026-01-22 14:02:42.245061908 +0000 UTC m=+0.074060343 container create 347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:02:42 np0005592157 systemd[1]: Started libpod-conmon-347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27.scope.
Jan 22 09:02:42 np0005592157 podman[254708]: 2026-01-22 14:02:42.196953291 +0000 UTC m=+0.025951746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:02:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:02:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e47ff189df86826dc31237cae862175a68f42ecae59603b76492311d108747a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e47ff189df86826dc31237cae862175a68f42ecae59603b76492311d108747a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e47ff189df86826dc31237cae862175a68f42ecae59603b76492311d108747a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e47ff189df86826dc31237cae862175a68f42ecae59603b76492311d108747a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:42 np0005592157 podman[254708]: 2026-01-22 14:02:42.33360805 +0000 UTC m=+0.162606505 container init 347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:02:42 np0005592157 podman[254708]: 2026-01-22 14:02:42.342743357 +0000 UTC m=+0.171741792 container start 347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:02:42 np0005592157 podman[254708]: 2026-01-22 14:02:42.347150687 +0000 UTC m=+0.176149272 container attach 347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:02:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]: {
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:    "0": [
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:        {
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "devices": [
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "/dev/loop3"
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            ],
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "lv_name": "ceph_lv0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "lv_size": "7511998464",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "name": "ceph_lv0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "tags": {
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.cluster_name": "ceph",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.crush_device_class": "",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.encrypted": "0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.osd_id": "0",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.type": "block",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:                "ceph.vdo": "0"
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            },
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "type": "block",
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:            "vg_name": "ceph_vg0"
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:        }
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]:    ]
Jan 22 09:02:43 np0005592157 jovial_goldberg[254726]: }
Jan 22 09:02:43 np0005592157 systemd[1]: libpod-347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27.scope: Deactivated successfully.
Jan 22 09:02:43 np0005592157 podman[254708]: 2026-01-22 14:02:43.199516608 +0000 UTC m=+1.028515053 container died 347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:02:43 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:43 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e47ff189df86826dc31237cae862175a68f42ecae59603b76492311d108747a9-merged.mount: Deactivated successfully.
Jan 22 09:02:43 np0005592157 podman[254708]: 2026-01-22 14:02:43.25791866 +0000 UTC m=+1.086917095 container remove 347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:02:43 np0005592157 systemd[1]: libpod-conmon-347cf9f733b40d744df59eafbf60566fecdb232eeb603920e6a2b29541240e27.scope: Deactivated successfully.
Jan 22 09:02:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:43 np0005592157 podman[254885]: 2026-01-22 14:02:43.943872812 +0000 UTC m=+0.044446457 container create 9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:02:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:43.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:43 np0005592157 systemd[1]: Started libpod-conmon-9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3.scope.
Jan 22 09:02:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:02:44 np0005592157 podman[254885]: 2026-01-22 14:02:43.926177592 +0000 UTC m=+0.026751257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:02:44 np0005592157 podman[254885]: 2026-01-22 14:02:44.029257766 +0000 UTC m=+0.129831431 container init 9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 09:02:44 np0005592157 podman[254885]: 2026-01-22 14:02:44.037403188 +0000 UTC m=+0.137976833 container start 9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:02:44 np0005592157 podman[254885]: 2026-01-22 14:02:44.041025268 +0000 UTC m=+0.141598923 container attach 9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 09:02:44 np0005592157 friendly_cray[254901]: 167 167
Jan 22 09:02:44 np0005592157 systemd[1]: libpod-9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3.scope: Deactivated successfully.
Jan 22 09:02:44 np0005592157 podman[254885]: 2026-01-22 14:02:44.044830483 +0000 UTC m=+0.145404138 container died 9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 22 09:02:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-88fa0450b23d9c21c00c1190cb3885ebb3b3bc310f5f7ba220a91a98a1a35368-merged.mount: Deactivated successfully.
Jan 22 09:02:44 np0005592157 podman[254885]: 2026-01-22 14:02:44.090842777 +0000 UTC m=+0.191416422 container remove 9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:02:44 np0005592157 systemd[1]: libpod-conmon-9d50da2fc40e9aeceb4ddba8ea80cec632e7a4d49173060c8617e7f0a54256f3.scope: Deactivated successfully.
Jan 22 09:02:44 np0005592157 podman[254926]: 2026-01-22 14:02:44.290082963 +0000 UTC m=+0.048257561 container create 174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:02:44 np0005592157 systemd[1]: Started libpod-conmon-174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee.scope.
Jan 22 09:02:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:44 np0005592157 podman[254926]: 2026-01-22 14:02:44.270084226 +0000 UTC m=+0.028258844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:02:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:02:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b12edb2165ce3b5921ebd1f6603950510d76ae8d3705e1e132d0675ccb98f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b12edb2165ce3b5921ebd1f6603950510d76ae8d3705e1e132d0675ccb98f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b12edb2165ce3b5921ebd1f6603950510d76ae8d3705e1e132d0675ccb98f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b12edb2165ce3b5921ebd1f6603950510d76ae8d3705e1e132d0675ccb98f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:02:44 np0005592157 podman[254926]: 2026-01-22 14:02:44.400494029 +0000 UTC m=+0.158668897 container init 174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:02:44 np0005592157 podman[254926]: 2026-01-22 14:02:44.409453252 +0000 UTC m=+0.167627850 container start 174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:02:44 np0005592157 podman[254926]: 2026-01-22 14:02:44.413200835 +0000 UTC m=+0.171375433 container attach 174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:02:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:02:44 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:44 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]: {
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]:        "osd_id": 0,
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]:        "type": "bluestore"
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]:    }
Jan 22 09:02:45 np0005592157 hopeful_proskuriakova[254972]: }
Jan 22 09:02:45 np0005592157 systemd[1]: libpod-174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee.scope: Deactivated successfully.
Jan 22 09:02:45 np0005592157 podman[254926]: 2026-01-22 14:02:45.302699749 +0000 UTC m=+1.060874347 container died 174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:02:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-837b12edb2165ce3b5921ebd1f6603950510d76ae8d3705e1e132d0675ccb98f-merged.mount: Deactivated successfully.
Jan 22 09:02:45 np0005592157 podman[254926]: 2026-01-22 14:02:45.364368752 +0000 UTC m=+1.122543350 container remove 174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 22 09:02:45 np0005592157 podman[255012]: 2026-01-22 14:02:45.367381677 +0000 UTC m=+0.098495301 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 09:02:45 np0005592157 systemd[1]: libpod-conmon-174befaf890cdc7d5c6eea2e56487de045ac79a0e4e27771bc409db6eb7195ee.scope: Deactivated successfully.
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:02:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:45.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 721f3cac-5bbf-4fad-a58c-856ded591c84 does not exist
Jan 22 09:02:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a690bb2c-c80e-4e59-b755-896c9b6d59e3 does not exist
Jan 22 09:02:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 13873f02-21fe-42c6-8028-542a99a11273 does not exist
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:02:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:45.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:02:46 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:02:47
Jan 22 09:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control']
Jan 22 09:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:02:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:02:47.570 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:02:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:02:47.570 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:02:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:02:47.571 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:02:47 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:47.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:48 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1559 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:02:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:49.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:50 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:50 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1559 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:50 np0005592157 podman[255099]: 2026-01-22 14:02:50.361991004 +0000 UTC m=+0.090083102 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:02:51 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:51.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:52 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:53 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:53.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:53.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:54 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:02:55 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:55 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:55.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:55.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:56 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:56 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:57.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:02:58 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:02:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:02:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:59.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:02:59 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:02:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:02:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:02:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:02:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:02:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:59.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:01 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:01.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:01.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:02 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:02 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:02 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:03 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:03 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:03.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:03:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:03:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:03.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:04 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:05.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:05 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:05 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:05.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:06 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:07.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:07 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:07.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:08 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:09.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1579 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:09 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:09 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1579 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:09.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:10 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:11 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:13 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:13.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:03:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:13.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:03:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:14 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:15.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:15 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:15 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:15 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:16.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:16 np0005592157 podman[255189]: 2026-01-22 14:03:16.321214021 +0000 UTC m=+0.048860476 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:03:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:03:16 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:17.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:17 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:18.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:18 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:18 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 22 09:03:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:18.990994) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:03:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 22 09:03:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090598991077, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2269, "num_deletes": 251, "total_data_size": 3408851, "memory_usage": 3469872, "flush_reason": "Manual Compaction"}
Jan 22 09:03:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599024167, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3322743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25714, "largest_seqno": 27982, "table_properties": {"data_size": 3313165, "index_size": 5624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24372, "raw_average_key_size": 21, "raw_value_size": 3291995, "raw_average_value_size": 2910, "num_data_blocks": 246, "num_entries": 1131, "num_filter_entries": 1131, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090422, "oldest_key_time": 1769090422, "file_creation_time": 1769090598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 33256 microseconds, and 9616 cpu microseconds.
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.024249) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3322743 bytes OK
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.024275) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.026289) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.026321) EVENT_LOG_v1 {"time_micros": 1769090599026313, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.026355) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3399221, prev total WAL file size 3399221, number of live WAL files 2.
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.027748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3244KB)], [59(7129KB)]
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599027886, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10623768, "oldest_snapshot_seqno": -1}
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6642 keys, 8912562 bytes, temperature: kUnknown
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599098212, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8912562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8871616, "index_size": 23234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 173982, "raw_average_key_size": 26, "raw_value_size": 8753821, "raw_average_value_size": 1317, "num_data_blocks": 917, "num_entries": 6642, "num_filter_entries": 6642, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769090599, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.098707) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8912562 bytes
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.100369) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.7 rd, 126.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 7162, records dropped: 520 output_compression: NoCompression
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.100391) EVENT_LOG_v1 {"time_micros": 1769090599100380, "job": 32, "event": "compaction_finished", "compaction_time_micros": 70476, "compaction_time_cpu_micros": 26263, "output_level": 6, "num_output_files": 1, "total_output_size": 8912562, "num_input_records": 7162, "num_output_records": 6642, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599101871, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599103569, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.027560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.103817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.103828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.103831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.103833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:03:19.103836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:03:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:19.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:20.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:20 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:20 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:21 np0005592157 podman[255211]: 2026-01-22 14:03:21.38503557 +0000 UTC m=+0.094195824 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:03:21 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:21.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:22.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:22 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:22 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:23 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:24.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:25 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:25 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:25.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:26.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:26 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:27.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:27 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:27 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:28 np0005592157 nova_compute[245707]: 2026-01-22 14:03:28.574 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:28 np0005592157 nova_compute[245707]: 2026-01-22 14:03:28.575 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:03:28 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:29 np0005592157 nova_compute[245707]: 2026-01-22 14:03:29.241 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:29 np0005592157 nova_compute[245707]: 2026-01-22 14:03:29.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:29.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:29 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:29 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:03:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:30.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:03:30 np0005592157 nova_compute[245707]: 2026-01-22 14:03:30.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:30 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:31 np0005592157 nova_compute[245707]: 2026-01-22 14:03:31.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:31.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:31 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:32.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:32 np0005592157 nova_compute[245707]: 2026-01-22 14:03:32.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:32 np0005592157 nova_compute[245707]: 2026-01-22 14:03:32.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:03:32 np0005592157 nova_compute[245707]: 2026-01-22 14:03:32.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:03:32 np0005592157 nova_compute[245707]: 2026-01-22 14:03:32.275 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:03:32 np0005592157 nova_compute[245707]: 2026-01-22 14:03:32.276 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:03:32 np0005592157 nova_compute[245707]: 2026-01-22 14:03:32.276 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:03:32 np0005592157 nova_compute[245707]: 2026-01-22 14:03:32.276 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:32 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.273 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.273 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.308 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.309 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.309 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.309 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.309 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:03:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:33.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:03:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364104935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.754 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:03:33 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.918 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.919 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5126MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.920 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:03:33 np0005592157 nova_compute[245707]: 2026-01-22 14:03:33.920 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:03:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:34.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.044 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.045 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.045 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.045 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.182 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:03:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:03:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2280181550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.627 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.636 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.743 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.745 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:03:34 np0005592157 nova_compute[245707]: 2026-01-22 14:03:34.745 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:03:34 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:34 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:35.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:35 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:36.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:36 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:37 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:38.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:39 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:40 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:40 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:41 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:42.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:42 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:42 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:43.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:43 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:44.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:45 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:45 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:45.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:46.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:46 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:03:47 np0005592157 podman[255551]: 2026-01-22 14:03:47.066812104 +0000 UTC m=+0.087611530 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:03:47
Jan 22 09:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.meta', '.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 22 09:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:03:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:47.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:03:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:03:47.571 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:03:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:03:47.572 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:03:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:03:47.572 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:03:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:48.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 317bae57-92f2-4186-a28b-106444d62ac6 does not exist
Jan 22 09:03:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 42273ee5-5745-42d5-9da7-7c4d091f0a7a does not exist
Jan 22 09:03:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6d81c462-f07f-41fb-b7f9-12f264650f68 does not exist
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:03:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:03:49 np0005592157 podman[255807]: 2026-01-22 14:03:49.264907096 +0000 UTC m=+0.046384375 container create ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:03:49 np0005592157 systemd[1]: Started libpod-conmon-ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7.scope.
Jan 22 09:03:49 np0005592157 podman[255807]: 2026-01-22 14:03:49.241801991 +0000 UTC m=+0.023279280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:03:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:03:49 np0005592157 podman[255807]: 2026-01-22 14:03:49.368756369 +0000 UTC m=+0.150233658 container init ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:03:49 np0005592157 podman[255807]: 2026-01-22 14:03:49.377210789 +0000 UTC m=+0.158688078 container start ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:03:49 np0005592157 podman[255807]: 2026-01-22 14:03:49.381037254 +0000 UTC m=+0.162514773 container attach ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:03:49 np0005592157 pedantic_carver[255823]: 167 167
Jan 22 09:03:49 np0005592157 systemd[1]: libpod-ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7.scope: Deactivated successfully.
Jan 22 09:03:49 np0005592157 conmon[255823]: conmon ceebc4b7f440b4bc04b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7.scope/container/memory.events
Jan 22 09:03:49 np0005592157 podman[255807]: 2026-01-22 14:03:49.391494684 +0000 UTC m=+0.172971963 container died ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:03:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-df73fc56726fc289f323d2b3735a66030723ee2915b9b2dd55f7fe9cc6455317-merged.mount: Deactivated successfully.
Jan 22 09:03:49 np0005592157 podman[255807]: 2026-01-22 14:03:49.433332205 +0000 UTC m=+0.214809484 container remove ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 09:03:49 np0005592157 systemd[1]: libpod-conmon-ceebc4b7f440b4bc04b270a689f490cf783df6dd837e9027cb351ad0266754d7.scope: Deactivated successfully.
Jan 22 09:03:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:49.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:49 np0005592157 podman[255847]: 2026-01-22 14:03:49.634378665 +0000 UTC m=+0.056271770 container create 961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:03:49 np0005592157 systemd[1]: Started libpod-conmon-961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c.scope.
Jan 22 09:03:49 np0005592157 podman[255847]: 2026-01-22 14:03:49.614742277 +0000 UTC m=+0.036635412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:03:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:03:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0264f00b9fd03a97dbddd7e4a45703a70c319d8e971d72d47d6370783e5959c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0264f00b9fd03a97dbddd7e4a45703a70c319d8e971d72d47d6370783e5959c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0264f00b9fd03a97dbddd7e4a45703a70c319d8e971d72d47d6370783e5959c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0264f00b9fd03a97dbddd7e4a45703a70c319d8e971d72d47d6370783e5959c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0264f00b9fd03a97dbddd7e4a45703a70c319d8e971d72d47d6370783e5959c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:49 np0005592157 podman[255847]: 2026-01-22 14:03:49.814780452 +0000 UTC m=+0.236673647 container init 961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:03:49 np0005592157 podman[255847]: 2026-01-22 14:03:49.827030297 +0000 UTC m=+0.248923442 container start 961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:03:49 np0005592157 podman[255847]: 2026-01-22 14:03:49.832141054 +0000 UTC m=+0.254034259 container attach 961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldwasser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:03:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:50 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:50 np0005592157 naughty_goldwasser[255864]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:03:50 np0005592157 naughty_goldwasser[255864]: --> relative data size: 1.0
Jan 22 09:03:50 np0005592157 naughty_goldwasser[255864]: --> All data devices are unavailable
Jan 22 09:03:50 np0005592157 systemd[1]: libpod-961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c.scope: Deactivated successfully.
Jan 22 09:03:50 np0005592157 podman[255847]: 2026-01-22 14:03:50.694655647 +0000 UTC m=+1.116548782 container died 961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldwasser, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:03:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0264f00b9fd03a97dbddd7e4a45703a70c319d8e971d72d47d6370783e5959c7-merged.mount: Deactivated successfully.
Jan 22 09:03:50 np0005592157 podman[255847]: 2026-01-22 14:03:50.762977197 +0000 UTC m=+1.184870302 container remove 961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldwasser, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:03:50 np0005592157 systemd[1]: libpod-conmon-961ed2daadd5fe60d3704d8dbffa28526fe1c7e732ee3ba91f3be944977f066c.scope: Deactivated successfully.
Jan 22 09:03:51 np0005592157 podman[256031]: 2026-01-22 14:03:51.453965503 +0000 UTC m=+0.050239690 container create fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:03:51 np0005592157 systemd[1]: Started libpod-conmon-fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e.scope.
Jan 22 09:03:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:03:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:51 np0005592157 podman[256031]: 2026-01-22 14:03:51.519305188 +0000 UTC m=+0.115579415 container init fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:03:51 np0005592157 podman[256031]: 2026-01-22 14:03:51.528177079 +0000 UTC m=+0.124451276 container start fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_carson, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:03:51 np0005592157 podman[256031]: 2026-01-22 14:03:51.435797111 +0000 UTC m=+0.032071318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:03:51 np0005592157 magical_carson[256049]: 167 167
Jan 22 09:03:51 np0005592157 podman[256031]: 2026-01-22 14:03:51.532073956 +0000 UTC m=+0.128348153 container attach fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:03:51 np0005592157 systemd[1]: libpod-fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e.scope: Deactivated successfully.
Jan 22 09:03:51 np0005592157 podman[256031]: 2026-01-22 14:03:51.532785014 +0000 UTC m=+0.129059201 container died fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_carson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 09:03:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f67bdd34215e89cdeabcc88726f20baf66632a12364bf62e3adfbd9164a53201-merged.mount: Deactivated successfully.
Jan 22 09:03:51 np0005592157 podman[256031]: 2026-01-22 14:03:51.576737987 +0000 UTC m=+0.173012214 container remove fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:03:51 np0005592157 systemd[1]: libpod-conmon-fa86d251f11391e119b41903c4f95f95c0e6fd7f3bbaeb79ccfbf0125cfaa27e.scope: Deactivated successfully.
Jan 22 09:03:51 np0005592157 podman[256046]: 2026-01-22 14:03:51.625082119 +0000 UTC m=+0.120290723 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:03:51 np0005592157 podman[256097]: 2026-01-22 14:03:51.783403787 +0000 UTC m=+0.064745951 container create 1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:03:51 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:51 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:51 np0005592157 systemd[1]: Started libpod-conmon-1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0.scope.
Jan 22 09:03:51 np0005592157 podman[256097]: 2026-01-22 14:03:51.750591911 +0000 UTC m=+0.031934135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:03:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:03:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f563e20d0c95e51ad22f3592f287247f5d93ca4bb7bf0455f3d61c853ff14730/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f563e20d0c95e51ad22f3592f287247f5d93ca4bb7bf0455f3d61c853ff14730/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f563e20d0c95e51ad22f3592f287247f5d93ca4bb7bf0455f3d61c853ff14730/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f563e20d0c95e51ad22f3592f287247f5d93ca4bb7bf0455f3d61c853ff14730/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:51 np0005592157 podman[256097]: 2026-01-22 14:03:51.890305156 +0000 UTC m=+0.171647330 container init 1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 09:03:51 np0005592157 podman[256097]: 2026-01-22 14:03:51.903141115 +0000 UTC m=+0.184483249 container start 1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:03:51 np0005592157 podman[256097]: 2026-01-22 14:03:51.907158465 +0000 UTC m=+0.188500599 container attach 1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:03:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:52.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]: {
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:    "0": [
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:        {
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "devices": [
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "/dev/loop3"
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            ],
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "lv_name": "ceph_lv0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "lv_size": "7511998464",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "name": "ceph_lv0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "tags": {
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.cluster_name": "ceph",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.crush_device_class": "",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.encrypted": "0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.osd_id": "0",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.type": "block",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:                "ceph.vdo": "0"
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            },
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "type": "block",
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:            "vg_name": "ceph_vg0"
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:        }
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]:    ]
Jan 22 09:03:52 np0005592157 bold_brahmagupta[256113]: }
Jan 22 09:03:52 np0005592157 systemd[1]: libpod-1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0.scope: Deactivated successfully.
Jan 22 09:03:52 np0005592157 podman[256097]: 2026-01-22 14:03:52.686881948 +0000 UTC m=+0.968224082 container died 1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:03:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f563e20d0c95e51ad22f3592f287247f5d93ca4bb7bf0455f3d61c853ff14730-merged.mount: Deactivated successfully.
Jan 22 09:03:52 np0005592157 podman[256097]: 2026-01-22 14:03:52.753918536 +0000 UTC m=+1.035260670 container remove 1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:03:52 np0005592157 systemd[1]: libpod-conmon-1bae614af53a33d61c56a26b35d8f99d563c1a6bd787a57011e332792f8612b0.scope: Deactivated successfully.
Jan 22 09:03:52 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:53.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:53 np0005592157 podman[256277]: 2026-01-22 14:03:53.55778724 +0000 UTC m=+0.055231075 container create e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:03:53 np0005592157 systemd[1]: Started libpod-conmon-e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488.scope.
Jan 22 09:03:53 np0005592157 podman[256277]: 2026-01-22 14:03:53.531764483 +0000 UTC m=+0.029208318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:03:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:03:53 np0005592157 podman[256277]: 2026-01-22 14:03:53.663158541 +0000 UTC m=+0.160602346 container init e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:03:53 np0005592157 podman[256277]: 2026-01-22 14:03:53.670133164 +0000 UTC m=+0.167576959 container start e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:03:53 np0005592157 podman[256277]: 2026-01-22 14:03:53.673686283 +0000 UTC m=+0.171130178 container attach e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:03:53 np0005592157 festive_mayer[256294]: 167 167
Jan 22 09:03:53 np0005592157 systemd[1]: libpod-e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488.scope: Deactivated successfully.
Jan 22 09:03:53 np0005592157 conmon[256294]: conmon e29584cfca8ab3d69ab3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488.scope/container/memory.events
Jan 22 09:03:53 np0005592157 podman[256299]: 2026-01-22 14:03:53.724177999 +0000 UTC m=+0.032464309 container died e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:03:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9f6b250957cd196af6b1f0906ba68077491a7881f58ab459414ee1d6413272aa-merged.mount: Deactivated successfully.
Jan 22 09:03:53 np0005592157 podman[256299]: 2026-01-22 14:03:53.765979498 +0000 UTC m=+0.074265788 container remove e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:03:53 np0005592157 systemd[1]: libpod-conmon-e29584cfca8ab3d69ab3cf58a2ef4f41e1a85bb430652a5896aba52adccad488.scope: Deactivated successfully.
Jan 22 09:03:53 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:53 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:54 np0005592157 podman[256321]: 2026-01-22 14:03:54.025903793 +0000 UTC m=+0.066406822 container create 5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:03:54 np0005592157 systemd[1]: Started libpod-conmon-5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda.scope.
Jan 22 09:03:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:54.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:54 np0005592157 podman[256321]: 2026-01-22 14:03:53.995071266 +0000 UTC m=+0.035574345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:03:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:03:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee2c7080ce6ad643d467ad4c05787b52c0f91bac3030a6fa9c30056daa9ab2bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee2c7080ce6ad643d467ad4c05787b52c0f91bac3030a6fa9c30056daa9ab2bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee2c7080ce6ad643d467ad4c05787b52c0f91bac3030a6fa9c30056daa9ab2bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee2c7080ce6ad643d467ad4c05787b52c0f91bac3030a6fa9c30056daa9ab2bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:03:54 np0005592157 podman[256321]: 2026-01-22 14:03:54.296600036 +0000 UTC m=+0.337103065 container init 5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:03:54 np0005592157 podman[256321]: 2026-01-22 14:03:54.31002929 +0000 UTC m=+0.350532319 container start 5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:03:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:54 np0005592157 podman[256321]: 2026-01-22 14:03:54.515304376 +0000 UTC m=+0.555807415 container attach 5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 09:03:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:55 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:55 np0005592157 bold_shockley[256337]: {
Jan 22 09:03:55 np0005592157 bold_shockley[256337]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:03:55 np0005592157 bold_shockley[256337]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:03:55 np0005592157 bold_shockley[256337]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:03:55 np0005592157 bold_shockley[256337]:        "osd_id": 0,
Jan 22 09:03:55 np0005592157 bold_shockley[256337]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:03:55 np0005592157 bold_shockley[256337]:        "type": "bluestore"
Jan 22 09:03:55 np0005592157 bold_shockley[256337]:    }
Jan 22 09:03:55 np0005592157 bold_shockley[256337]: }
Jan 22 09:03:55 np0005592157 systemd[1]: libpod-5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda.scope: Deactivated successfully.
Jan 22 09:03:55 np0005592157 podman[256321]: 2026-01-22 14:03:55.226245479 +0000 UTC m=+1.266748508 container died 5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:03:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ee2c7080ce6ad643d467ad4c05787b52c0f91bac3030a6fa9c30056daa9ab2bf-merged.mount: Deactivated successfully.
Jan 22 09:03:55 np0005592157 podman[256321]: 2026-01-22 14:03:55.293347758 +0000 UTC m=+1.333850747 container remove 5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 22 09:03:55 np0005592157 systemd[1]: libpod-conmon-5ca9c306936b1fb5c83d12806ae1c9404a239ce9c3a8ebf383ed8422cce2bcda.scope: Deactivated successfully.
Jan 22 09:03:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:03:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:03:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 01e99895-e168-4aab-a215-03c9db0355ce does not exist
Jan 22 09:03:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4c196666-da85-4428-9355-b4733756e196 does not exist
Jan 22 09:03:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0b915327-0c8a-4302-b9ff-56c018f9965e does not exist
Jan 22 09:03:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:56.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:56 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:56 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:57 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:03:58 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:03:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:03:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:59.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:03:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:59 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:59 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:00 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:01 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:03 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:04:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:04:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:04.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:04 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:05 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:05.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:06.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:06 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:07.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:07 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:04:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:08.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:04:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:09 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:09.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:10.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:10 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:10 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:10 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:11.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:12.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:12 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:13 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:13.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:14.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:14 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:14 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:15.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:16 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:16 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:16.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:04:17 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:17 np0005592157 podman[256485]: 2026-01-22 14:04:17.364236667 +0000 UTC m=+0.085891433 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:04:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:17.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:18.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:18 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:19 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:19.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:20.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:20 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:20 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:21 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:21 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:22.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:22 np0005592157 podman[256507]: 2026-01-22 14:04:22.40689916 +0000 UTC m=+0.130170937 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:04:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:22 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:23.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:23 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:24.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:24 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:24 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:25.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:25 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:26.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:27 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:27.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:28.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:28 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:29 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:04:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:29.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:04:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:29 np0005592157 nova_compute[245707]: 2026-01-22 14:04:29.716 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:29 np0005592157 nova_compute[245707]: 2026-01-22 14:04:29.717 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:04:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:30.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:30 np0005592157 nova_compute[245707]: 2026-01-22 14:04:30.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:30 np0005592157 nova_compute[245707]: 2026-01-22 14:04:30.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:30 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:30 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:31 np0005592157 nova_compute[245707]: 2026-01-22 14:04:31.241 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:31 np0005592157 nova_compute[245707]: 2026-01-22 14:04:31.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:04:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 9208 writes, 35K keys, 9208 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9208 writes, 2077 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 773 writes, 1765 keys, 773 commit groups, 1.0 writes per commit group, ingest: 0.99 MB, 0.00 MB/s#012Interval WAL: 773 writes, 335 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:04:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:31.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:31 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:31 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:32 np0005592157 nova_compute[245707]: 2026-01-22 14:04:32.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:32 np0005592157 nova_compute[245707]: 2026-01-22 14:04:32.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:04:32 np0005592157 nova_compute[245707]: 2026-01-22 14:04:32.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:04:32 np0005592157 nova_compute[245707]: 2026-01-22 14:04:32.266 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:04:32 np0005592157 nova_compute[245707]: 2026-01-22 14:04:32.267 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:04:32 np0005592157 nova_compute[245707]: 2026-01-22 14:04:32.267 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:04:32 np0005592157 nova_compute[245707]: 2026-01-22 14:04:32.267 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:32 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.280 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.281 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.281 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.282 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.282 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:04:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:33.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:33 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:04:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835375088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.740 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.945 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.946 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5127MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.947 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:04:33 np0005592157 nova_compute[245707]: 2026-01-22 14:04:33.947 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.049 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.050 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.050 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.050 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.106 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:04:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:34.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:04:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2981244524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.537 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.545 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.564 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.566 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:04:34 np0005592157 nova_compute[245707]: 2026-01-22 14:04:34.566 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:04:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:34 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:34 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:35.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:35 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:36 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:38.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:38 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:39 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:39.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1669 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:40.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:40 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:40 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1669 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:41 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:41.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:04:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:42.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:04:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:42 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:42 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:43 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:43.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:44.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:44 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 09:04:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:45 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:45 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:46.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:04:47
Jan 22 09:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', 'default.rgw.meta']
Jan 22 09:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:04:47 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:04:47.572 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:04:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:04:47.574 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:04:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:04:47.574 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:04:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:47.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:48.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:48 np0005592157 podman[256691]: 2026-01-22 14:04:48.350154447 +0000 UTC m=+0.080601081 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 09:04:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:48 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:49.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:49 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:50.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:51 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:51 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:51.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:52 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:52.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:53 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:53 np0005592157 podman[256715]: 2026-01-22 14:04:53.428906602 +0000 UTC m=+0.158798961 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 22 09:04:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:53.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:54 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:54.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:55 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:55 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:55.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:56 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:04:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:57.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:04:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:58.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:04:58 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:58 np0005592157 podman[257141]: 2026-01-22 14:04:58.827204884 +0000 UTC m=+0.044808818 container create 57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:04:58 np0005592157 systemd[1]: Started libpod-conmon-57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c.scope.
Jan 22 09:04:58 np0005592157 podman[257141]: 2026-01-22 14:04:58.806521139 +0000 UTC m=+0.024125103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:04:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:04:58 np0005592157 podman[257141]: 2026-01-22 14:04:58.93972885 +0000 UTC m=+0.157332875 container init 57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:04:58 np0005592157 podman[257141]: 2026-01-22 14:04:58.948897229 +0000 UTC m=+0.166501193 container start 57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ganguly, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:04:58 np0005592157 podman[257141]: 2026-01-22 14:04:58.952660193 +0000 UTC m=+0.170264147 container attach 57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:04:58 np0005592157 systemd[1]: libpod-57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c.scope: Deactivated successfully.
Jan 22 09:04:58 np0005592157 trusting_ganguly[257157]: 167 167
Jan 22 09:04:58 np0005592157 conmon[257157]: conmon 57eaa95b17b9e997ab47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c.scope/container/memory.events
Jan 22 09:04:58 np0005592157 podman[257141]: 2026-01-22 14:04:58.960019266 +0000 UTC m=+0.177623210 container died 57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ganguly, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:04:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-df37afa6750e333821869e09b50a0491c9d0872110b7dbb08d0b5cec9d9d1028-merged.mount: Deactivated successfully.
Jan 22 09:04:59 np0005592157 podman[257141]: 2026-01-22 14:04:59.039349315 +0000 UTC m=+0.256953259 container remove 57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:04:59 np0005592157 systemd[1]: libpod-conmon-57eaa95b17b9e997ab47423848a6b34bc7ae7a91db048e14290cff1af560d77c.scope: Deactivated successfully.
Jan 22 09:04:59 np0005592157 podman[257183]: 2026-01-22 14:04:59.239166938 +0000 UTC m=+0.064840008 container create 6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592157 systemd[1]: Started libpod-conmon-6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad.scope.
Jan 22 09:04:59 np0005592157 podman[257183]: 2026-01-22 14:04:59.216447991 +0000 UTC m=+0.042121061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:04:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:04:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764ba5bce693ef97afc2194f6d0924f1eb78bde9eecc90e2f6a491b500835acc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764ba5bce693ef97afc2194f6d0924f1eb78bde9eecc90e2f6a491b500835acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764ba5bce693ef97afc2194f6d0924f1eb78bde9eecc90e2f6a491b500835acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764ba5bce693ef97afc2194f6d0924f1eb78bde9eecc90e2f6a491b500835acc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:59 np0005592157 podman[257183]: 2026-01-22 14:04:59.359104339 +0000 UTC m=+0.184777399 container init 6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:04:59 np0005592157 podman[257183]: 2026-01-22 14:04:59.368084633 +0000 UTC m=+0.193757663 container start 6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 22 09:04:59 np0005592157 podman[257183]: 2026-01-22 14:04:59.372100983 +0000 UTC m=+0.197774083 container attach 6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:04:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1689 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:00.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1689 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]: [
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:    {
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "available": false,
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "ceph_device": false,
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "lsm_data": {},
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "lvs": [],
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "path": "/dev/sr0",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "rejected_reasons": [
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "Insufficient space (<5GB)",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "Has a FileSystem"
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        ],
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        "sys_api": {
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "actuators": null,
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "device_nodes": "sr0",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "devname": "sr0",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "human_readable_size": "482.00 KB",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "id_bus": "ata",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "model": "QEMU DVD-ROM",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "nr_requests": "2",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "parent": "/dev/sr0",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "partitions": {},
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "path": "/dev/sr0",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "removable": "1",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "rev": "2.5+",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "ro": "0",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "rotational": "1",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "sas_address": "",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "sas_device_handle": "",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "scheduler_mode": "mq-deadline",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "sectors": 0,
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "sectorsize": "2048",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "size": 493568.0,
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "support_discard": "2048",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "type": "disk",
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:            "vendor": "QEMU"
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:        }
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]:    }
Jan 22 09:05:00 np0005592157 elegant_chaum[257200]: ]
Jan 22 09:05:00 np0005592157 systemd[1]: libpod-6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad.scope: Deactivated successfully.
Jan 22 09:05:00 np0005592157 systemd[1]: libpod-6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad.scope: Consumed 1.418s CPU time.
Jan 22 09:05:00 np0005592157 podman[257183]: 2026-01-22 14:05:00.758831105 +0000 UTC m=+1.584504145 container died 6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:05:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-764ba5bce693ef97afc2194f6d0924f1eb78bde9eecc90e2f6a491b500835acc-merged.mount: Deactivated successfully.
Jan 22 09:05:00 np0005592157 podman[257183]: 2026-01-22 14:05:00.829735323 +0000 UTC m=+1.655408353 container remove 6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:05:00 np0005592157 systemd[1]: libpod-conmon-6c6b363e58deff2dd886882d552d8b5edad00ae8fc181d24bc22b6950ca92dad.scope: Deactivated successfully.
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:05:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2613d392-89bd-4a7c-b22a-8960ac97ea76 does not exist
Jan 22 09:05:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 37a12494-7448-40ba-a44e-fe3e46de19ef does not exist
Jan 22 09:05:01 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 77acc4a1-918c-43ef-a2bd-ffff1433bc47 does not exist
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:05:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:01.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:01 np0005592157 podman[258637]: 2026-01-22 14:05:01.769260193 +0000 UTC m=+0.057571547 container create 32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:05:01 np0005592157 systemd[1]: Started libpod-conmon-32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9.scope.
Jan 22 09:05:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:05:01 np0005592157 podman[258637]: 2026-01-22 14:05:01.746758702 +0000 UTC m=+0.035070036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:05:01 np0005592157 podman[258637]: 2026-01-22 14:05:01.854663483 +0000 UTC m=+0.142974867 container init 32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:05:01 np0005592157 podman[258637]: 2026-01-22 14:05:01.861001151 +0000 UTC m=+0.149312495 container start 32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_herschel, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:05:01 np0005592157 podman[258637]: 2026-01-22 14:05:01.865323128 +0000 UTC m=+0.153634572 container attach 32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_herschel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:05:01 np0005592157 pensive_herschel[258653]: 167 167
Jan 22 09:05:01 np0005592157 systemd[1]: libpod-32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9.scope: Deactivated successfully.
Jan 22 09:05:01 np0005592157 podman[258637]: 2026-01-22 14:05:01.870008605 +0000 UTC m=+0.158319959 container died 32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_herschel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:05:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f3c5cf0936135ad1d80589a0f6cfb65f409ea11d8d99cf1a3147fff2936eb778-merged.mount: Deactivated successfully.
Jan 22 09:05:01 np0005592157 podman[258637]: 2026-01-22 14:05:01.923803357 +0000 UTC m=+0.212114671 container remove 32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_herschel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:05:01 np0005592157 systemd[1]: libpod-conmon-32967bd0f2b7e08f90b47cc349e9b6930a049b8388f186dfee00ded7aa8e8ea9.scope: Deactivated successfully.
Jan 22 09:05:02 np0005592157 podman[258678]: 2026-01-22 14:05:02.166712395 +0000 UTC m=+0.065510735 container create d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:05:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:02 np0005592157 systemd[1]: Started libpod-conmon-d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff.scope.
Jan 22 09:05:02 np0005592157 podman[258678]: 2026-01-22 14:05:02.145354152 +0000 UTC m=+0.044152532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:05:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:05:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ff210d2f2b1d557763620113ffa228fa3ffb6cbd60b30e4c9f6dc1ee8675aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ff210d2f2b1d557763620113ffa228fa3ffb6cbd60b30e4c9f6dc1ee8675aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ff210d2f2b1d557763620113ffa228fa3ffb6cbd60b30e4c9f6dc1ee8675aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ff210d2f2b1d557763620113ffa228fa3ffb6cbd60b30e4c9f6dc1ee8675aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ff210d2f2b1d557763620113ffa228fa3ffb6cbd60b30e4c9f6dc1ee8675aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:02 np0005592157 podman[258678]: 2026-01-22 14:05:02.275899638 +0000 UTC m=+0.174697998 container init d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:05:02 np0005592157 podman[258678]: 2026-01-22 14:05:02.285447286 +0000 UTC m=+0.184245646 container start d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:05:02 np0005592157 podman[258678]: 2026-01-22 14:05:02.289307482 +0000 UTC m=+0.188105942 container attach d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:05:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:02 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:03 np0005592157 eloquent_archimedes[258695]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:05:03 np0005592157 eloquent_archimedes[258695]: --> relative data size: 1.0
Jan 22 09:05:03 np0005592157 eloquent_archimedes[258695]: --> All data devices are unavailable
Jan 22 09:05:03 np0005592157 systemd[1]: libpod-d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff.scope: Deactivated successfully.
Jan 22 09:05:03 np0005592157 podman[258710]: 2026-01-22 14:05:03.226232597 +0000 UTC m=+0.027661311 container died d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:05:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f4ff210d2f2b1d557763620113ffa228fa3ffb6cbd60b30e4c9f6dc1ee8675aa-merged.mount: Deactivated successfully.
Jan 22 09:05:03 np0005592157 podman[258710]: 2026-01-22 14:05:03.279803253 +0000 UTC m=+0.081231927 container remove d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:05:03 np0005592157 systemd[1]: libpod-conmon-d8aea868cb457108f804449046816680ebc1f02f6ce4ccd2cfb079d89f657dff.scope: Deactivated successfully.
Jan 22 09:05:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:03 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:05:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:05:03 np0005592157 podman[258867]: 2026-01-22 14:05:03.96980772 +0000 UTC m=+0.053154616 container create e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 09:05:04 np0005592157 systemd[1]: Started libpod-conmon-e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3.scope.
Jan 22 09:05:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:05:04 np0005592157 podman[258867]: 2026-01-22 14:05:03.949205767 +0000 UTC m=+0.032552663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:05:04 np0005592157 podman[258867]: 2026-01-22 14:05:04.061564249 +0000 UTC m=+0.144911235 container init e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:05:04 np0005592157 podman[258867]: 2026-01-22 14:05:04.069271391 +0000 UTC m=+0.152618317 container start e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:05:04 np0005592157 podman[258867]: 2026-01-22 14:05:04.073502136 +0000 UTC m=+0.156849082 container attach e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:05:04 np0005592157 tender_black[258884]: 167 167
Jan 22 09:05:04 np0005592157 systemd[1]: libpod-e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3.scope: Deactivated successfully.
Jan 22 09:05:04 np0005592157 podman[258867]: 2026-01-22 14:05:04.076089011 +0000 UTC m=+0.159435997 container died e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:05:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-29d1874a18c6c4190d52f48dc45e88267618c26d8c518b020a1e7de1abeedf9b-merged.mount: Deactivated successfully.
Jan 22 09:05:04 np0005592157 podman[258867]: 2026-01-22 14:05:04.121613626 +0000 UTC m=+0.204960512 container remove e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:05:04 np0005592157 systemd[1]: libpod-conmon-e96059458da9b154ef72e43ceef679df355df3779250c939dec1e1098ac927c3.scope: Deactivated successfully.
Jan 22 09:05:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:04 np0005592157 podman[258910]: 2026-01-22 14:05:04.335875148 +0000 UTC m=+0.051611168 container create d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:05:04 np0005592157 systemd[1]: Started libpod-conmon-d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435.scope.
Jan 22 09:05:04 np0005592157 podman[258910]: 2026-01-22 14:05:04.31309639 +0000 UTC m=+0.028832410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:05:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:05:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367c0a2d0e8e6305de4ece74f156d5d355051d5fbdab4300eab576a2ed18f6c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367c0a2d0e8e6305de4ece74f156d5d355051d5fbdab4300eab576a2ed18f6c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367c0a2d0e8e6305de4ece74f156d5d355051d5fbdab4300eab576a2ed18f6c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367c0a2d0e8e6305de4ece74f156d5d355051d5fbdab4300eab576a2ed18f6c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:04 np0005592157 podman[258910]: 2026-01-22 14:05:04.438102598 +0000 UTC m=+0.153838648 container init d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:05:04 np0005592157 podman[258910]: 2026-01-22 14:05:04.451153133 +0000 UTC m=+0.166889113 container start d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:05:04 np0005592157 podman[258910]: 2026-01-22 14:05:04.454919347 +0000 UTC m=+0.170655407 container attach d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 09:05:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:04 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:04 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:05 np0005592157 nova_compute[245707]: 2026-01-22 14:05:05.142 245711 DEBUG oslo_concurrency.lockutils [None req-090c3737-100b-4a76-ac6d-736ff1be1c3e fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "18becd7f-5901-49d8-87eb-548e630001aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:05 np0005592157 sad_germain[258926]: {
Jan 22 09:05:05 np0005592157 sad_germain[258926]:    "0": [
Jan 22 09:05:05 np0005592157 sad_germain[258926]:        {
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "devices": [
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "/dev/loop3"
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            ],
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "lv_name": "ceph_lv0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "lv_size": "7511998464",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "name": "ceph_lv0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "tags": {
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.cluster_name": "ceph",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.crush_device_class": "",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.encrypted": "0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.osd_id": "0",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.type": "block",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:                "ceph.vdo": "0"
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            },
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "type": "block",
Jan 22 09:05:05 np0005592157 sad_germain[258926]:            "vg_name": "ceph_vg0"
Jan 22 09:05:05 np0005592157 sad_germain[258926]:        }
Jan 22 09:05:05 np0005592157 sad_germain[258926]:    ]
Jan 22 09:05:05 np0005592157 sad_germain[258926]: }
Jan 22 09:05:05 np0005592157 systemd[1]: libpod-d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435.scope: Deactivated successfully.
Jan 22 09:05:05 np0005592157 podman[258935]: 2026-01-22 14:05:05.319319484 +0000 UTC m=+0.039856315 container died d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:05:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-367c0a2d0e8e6305de4ece74f156d5d355051d5fbdab4300eab576a2ed18f6c5-merged.mount: Deactivated successfully.
Jan 22 09:05:05 np0005592157 podman[258935]: 2026-01-22 14:05:05.375024773 +0000 UTC m=+0.095561604 container remove d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:05:05 np0005592157 systemd[1]: libpod-conmon-d15bc7c61ae647dcfb1daa731aff9bfdec01ad4e858c9c8da13c4c0bdaae3435.scope: Deactivated successfully.
Jan 22 09:05:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:05 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:05 np0005592157 podman[259140]: 2026-01-22 14:05:05.986074771 +0000 UTC m=+0.041551767 container create b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:05:06 np0005592157 systemd[1]: Started libpod-conmon-b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0.scope.
Jan 22 09:05:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:05:06 np0005592157 podman[259140]: 2026-01-22 14:05:05.967877337 +0000 UTC m=+0.023354363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:05:06 np0005592157 podman[259140]: 2026-01-22 14:05:06.07703613 +0000 UTC m=+0.132513146 container init b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:05:06 np0005592157 podman[259140]: 2026-01-22 14:05:06.084343182 +0000 UTC m=+0.139820178 container start b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 09:05:06 np0005592157 podman[259140]: 2026-01-22 14:05:06.087570602 +0000 UTC m=+0.143047818 container attach b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:05:06 np0005592157 beautiful_elion[259156]: 167 167
Jan 22 09:05:06 np0005592157 systemd[1]: libpod-b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0.scope: Deactivated successfully.
Jan 22 09:05:06 np0005592157 conmon[259156]: conmon b2996ae0873e6b728371 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0.scope/container/memory.events
Jan 22 09:05:06 np0005592157 podman[259140]: 2026-01-22 14:05:06.095067399 +0000 UTC m=+0.150544445 container died b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:05:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4b3b1fc25719f34f026f1e183e6eb898b6e9f817969255d96c73f905717a6cdb-merged.mount: Deactivated successfully.
Jan 22 09:05:06 np0005592157 podman[259140]: 2026-01-22 14:05:06.138686447 +0000 UTC m=+0.194163443 container remove b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:05:06 np0005592157 systemd[1]: libpod-conmon-b2996ae0873e6b728371bc5a39e2fc3451a0d9efe629be170c4fefef39ce1fd0.scope: Deactivated successfully.
Jan 22 09:05:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:06.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:06 np0005592157 podman[259179]: 2026-01-22 14:05:06.312234805 +0000 UTC m=+0.048850839 container create 6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:05:06 np0005592157 systemd[1]: Started libpod-conmon-6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6.scope.
Jan 22 09:05:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:05:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebf18341542e74227ca9708eb35e83331c23f3137485c213f431bf27bede0e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebf18341542e74227ca9708eb35e83331c23f3137485c213f431bf27bede0e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebf18341542e74227ca9708eb35e83331c23f3137485c213f431bf27bede0e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebf18341542e74227ca9708eb35e83331c23f3137485c213f431bf27bede0e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:05:06 np0005592157 podman[259179]: 2026-01-22 14:05:06.288091573 +0000 UTC m=+0.024707597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:05:06 np0005592157 podman[259179]: 2026-01-22 14:05:06.401718597 +0000 UTC m=+0.138334661 container init 6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ishizaka, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:05:06 np0005592157 podman[259179]: 2026-01-22 14:05:06.415702995 +0000 UTC m=+0.152318989 container start 6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:05:06 np0005592157 podman[259179]: 2026-01-22 14:05:06.420459744 +0000 UTC m=+0.157075788 container attach 6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:05:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:06 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]: {
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]:        "osd_id": 0,
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]:        "type": "bluestore"
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]:    }
Jan 22 09:05:07 np0005592157 keen_ishizaka[259195]: }
Jan 22 09:05:07 np0005592157 systemd[1]: libpod-6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6.scope: Deactivated successfully.
Jan 22 09:05:07 np0005592157 podman[259179]: 2026-01-22 14:05:07.389518381 +0000 UTC m=+1.126134375 container died 6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:05:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3ebf18341542e74227ca9708eb35e83331c23f3137485c213f431bf27bede0e3-merged.mount: Deactivated successfully.
Jan 22 09:05:07 np0005592157 podman[259179]: 2026-01-22 14:05:07.448169093 +0000 UTC m=+1.184785087 container remove 6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ishizaka, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:05:07 np0005592157 systemd[1]: libpod-conmon-6330faf3edb347f243bb7a85879b825bcc75abd9393bec90a9692c03f23d52b6.scope: Deactivated successfully.
Jan 22 09:05:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:05:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:05:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:07 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7acc5f69-2129-43b6-982d-5928fcff13f1 does not exist
Jan 22 09:05:07 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f88342bf-35eb-46b3-9056-0f0925063fd4 does not exist
Jan 22 09:05:07 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 88510327-86ac-49ea-b8f0-271b3e6ed15d does not exist
Jan 22 09:05:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:07.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:07 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:08.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:09.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:10 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:10.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:11 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:11 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:11 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:11.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:12 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:12.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:13 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:13.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:14.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:14 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:14 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:15 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:15 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:16.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:05:16 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:17.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:17 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:05:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508443213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:05:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:05:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508443213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:05:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:18.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:19 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:19 np0005592157 podman[259284]: 2026-01-22 14:05:19.349061634 +0000 UTC m=+0.074849918 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 09:05:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:19.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:20.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:20 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:20 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:21 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:22.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:22 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:23.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:23 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:23 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:24.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:24 np0005592157 podman[259307]: 2026-01-22 14:05:24.37175781 +0000 UTC m=+0.110672441 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 09:05:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:25 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:25 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:25.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:26 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:26.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:27 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:27.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:28 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:29 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:29 np0005592157 nova_compute[245707]: 2026-01-22 14:05:29.567 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:29 np0005592157 nova_compute[245707]: 2026-01-22 14:05:29.567 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:05:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:29.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:30 np0005592157 nova_compute[245707]: 2026-01-22 14:05:30.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:30 np0005592157 nova_compute[245707]: 2026-01-22 14:05:30.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:05:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:30.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:30 np0005592157 nova_compute[245707]: 2026-01-22 14:05:30.297 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:05:30 np0005592157 nova_compute[245707]: 2026-01-22 14:05:30.298 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:30 np0005592157 nova_compute[245707]: 2026-01-22 14:05:30.298 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:05:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:31 np0005592157 nova_compute[245707]: 2026-01-22 14:05:31.319 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:31 np0005592157 nova_compute[245707]: 2026-01-22 14:05:31.320 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:31 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:31 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:31 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:31.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:32 np0005592157 nova_compute[245707]: 2026-01-22 14:05:32.240 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:32 np0005592157 nova_compute[245707]: 2026-01-22 14:05:32.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:32.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.473 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.473 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.474 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.474 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.475 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.595 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.595 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.595 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.596 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:05:33 np0005592157 nova_compute[245707]: 2026-01-22 14:05:33.596 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:05:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:33.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:33 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:05:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659360568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.081 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.246 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.247 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.248 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.248 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:05:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:34.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.412 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.412 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.413 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.413 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:05:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.511 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing inventories for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.581 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating ProviderTree inventory for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.582 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating inventory in ProviderTree for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.618 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing aggregate associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:05:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.647 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing trait associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:05:34 np0005592157 nova_compute[245707]: 2026-01-22 14:05:34.720 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:05:35 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:35 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:05:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439210580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:05:35 np0005592157 nova_compute[245707]: 2026-01-22 14:05:35.180 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:05:35 np0005592157 nova_compute[245707]: 2026-01-22 14:05:35.186 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:05:35 np0005592157 nova_compute[245707]: 2026-01-22 14:05:35.208 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:05:35 np0005592157 nova_compute[245707]: 2026-01-22 14:05:35.236 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:05:35 np0005592157 nova_compute[245707]: 2026-01-22 14:05:35.236 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:05:35 np0005592157 nova_compute[245707]: 2026-01-22 14:05:35.237 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:35.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:36.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:36 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592157 nova_compute[245707]: 2026-01-22 14:05:37.016 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:37 np0005592157 nova_compute[245707]: 2026-01-22 14:05:37.053 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:37.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:37 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:38.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:38 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:38 np0005592157 nova_compute[245707]: 2026-01-22 14:05:38.950 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:38 np0005592157 nova_compute[245707]: 2026-01-22 14:05:38.995 245711 WARNING nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 22 09:05:38 np0005592157 nova_compute[245707]: 2026-01-22 14:05:38.995 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Triggering sync for uuid 18becd7f-5901-49d8-87eb-548e630001aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:05:38 np0005592157 nova_compute[245707]: 2026-01-22 14:05:38.996 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Triggering sync for uuid 1089392f-9bda-4904-9370-95fc2c3dd7c2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:05:38 np0005592157 nova_compute[245707]: 2026-01-22 14:05:38.996 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "18becd7f-5901-49d8-87eb-548e630001aa" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:38 np0005592157 nova_compute[245707]: 2026-01-22 14:05:38.997 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "1089392f-9bda-4904-9370-95fc2c3dd7c2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:39.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:39 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:39 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:40 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:41.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:41 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:42.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:43 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:43.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:44 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:44.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:45 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:45 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:45.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:46.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:46 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:05:47
Jan 22 09:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 22 09:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:05:47 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:05:47.573 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:05:47.574 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:05:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:05:47.574 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:05:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:48.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:48 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:48 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:49 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:49.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:50.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:50 np0005592157 podman[259493]: 2026-01-22 14:05:50.32581313 +0000 UTC m=+0.059449594 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 22 09:05:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:50 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:50 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:51 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:52.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:52 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:53 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:54 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:54 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:55 np0005592157 podman[259514]: 2026-01-22 14:05:55.350020331 +0000 UTC m=+0.085569165 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:05:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:05:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:55.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:05:55 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:56 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:57 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:58.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:05:58 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 1749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:05:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:05:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:59.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:05:59 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:59 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 1749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:00.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 09:06:00 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:01.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:01 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:02.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:06:03 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:03.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:06:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:06:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:04.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:06:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:04 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:04 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:05.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 1754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.829199) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765829290, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2482, "num_deletes": 506, "total_data_size": 3390739, "memory_usage": 3452320, "flush_reason": "Manual Compaction"}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765868253, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3276667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27983, "largest_seqno": 30464, "table_properties": {"data_size": 3266581, "index_size": 5556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 27698, "raw_average_key_size": 20, "raw_value_size": 3242643, "raw_average_value_size": 2389, "num_data_blocks": 243, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090600, "oldest_key_time": 1769090600, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 39150 microseconds, and 14232 cpu microseconds.
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.868366) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3276667 bytes OK
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.868541) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.873114) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.873188) EVENT_LOG_v1 {"time_micros": 1769090765873169, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.873232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3379279, prev total WAL file size 3379279, number of live WAL files 2.
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.875778) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3199KB)], [62(8703KB)]
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765875879, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12189229, "oldest_snapshot_seqno": -1}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6969 keys, 10360176 bytes, temperature: kUnknown
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765965515, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10360176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10315827, "index_size": 25805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183756, "raw_average_key_size": 26, "raw_value_size": 10190717, "raw_average_value_size": 1462, "num_data_blocks": 1022, "num_entries": 6969, "num_filter_entries": 6969, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.966226) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10360176 bytes
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.968272) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.8 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 7999, records dropped: 1030 output_compression: NoCompression
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.968318) EVENT_LOG_v1 {"time_micros": 1769090765968300, "job": 34, "event": "compaction_finished", "compaction_time_micros": 89730, "compaction_time_cpu_micros": 37425, "output_level": 6, "num_output_files": 1, "total_output_size": 10360176, "num_input_records": 7999, "num_output_records": 6969, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765969286, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765971883, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.874910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.972030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.972039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.972042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.972044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:06:05.972047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:06.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:06:06 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:07.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:07 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:06:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:06:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:06:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:08 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 561f73a2-735e-4cf5-9230-7fcfdfb0bcd9 does not exist
Jan 22 09:06:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1e457667-4333-4ec0-b639-479d28e29331 does not exist
Jan 22 09:06:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7a189f48-1f1b-4ad7-be50-06aed4a4f6a6 does not exist
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1759 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:09.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:10 np0005592157 podman[259871]: 2026-01-22 14:06:10.149409999 +0000 UTC m=+0.047289310 container create 086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ellis, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:06:10 np0005592157 systemd[1]: Started libpod-conmon-086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78.scope.
Jan 22 09:06:10 np0005592157 podman[259871]: 2026-01-22 14:06:10.127642746 +0000 UTC m=+0.025522077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:06:10 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:06:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:06:10 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 1759 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:06:10 np0005592157 podman[259871]: 2026-01-22 14:06:10.263750721 +0000 UTC m=+0.161630102 container init 086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ellis, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:06:10 np0005592157 podman[259871]: 2026-01-22 14:06:10.278585331 +0000 UTC m=+0.176464652 container start 086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:06:10 np0005592157 podman[259871]: 2026-01-22 14:06:10.283004781 +0000 UTC m=+0.180884092 container attach 086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:06:10 np0005592157 elegant_ellis[259887]: 167 167
Jan 22 09:06:10 np0005592157 systemd[1]: libpod-086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78.scope: Deactivated successfully.
Jan 22 09:06:10 np0005592157 podman[259871]: 2026-01-22 14:06:10.288046357 +0000 UTC m=+0.185925678 container died 086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:06:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-81490c552facc9f1df43debec3a2515aaa47cfc87abd78108f6fe9c756756834-merged.mount: Deactivated successfully.
Jan 22 09:06:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:10.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:10 np0005592157 podman[259871]: 2026-01-22 14:06:10.344756831 +0000 UTC m=+0.242636182 container remove 086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:06:10 np0005592157 systemd[1]: libpod-conmon-086aa0b2d1183b216df7c622d2dde51bcd4af36d900ec6765e171b0116303f78.scope: Deactivated successfully.
Jan 22 09:06:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:06:10 np0005592157 podman[259911]: 2026-01-22 14:06:10.573264869 +0000 UTC m=+0.060775356 container create eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 09:06:10 np0005592157 systemd[1]: Started libpod-conmon-eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6.scope.
Jan 22 09:06:10 np0005592157 podman[259911]: 2026-01-22 14:06:10.542684877 +0000 UTC m=+0.030195464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:06:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:06:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0db268bb6afcf0eb760493d09a7fb36a5b555d99d45960a07b062b4f106d580/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0db268bb6afcf0eb760493d09a7fb36a5b555d99d45960a07b062b4f106d580/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0db268bb6afcf0eb760493d09a7fb36a5b555d99d45960a07b062b4f106d580/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0db268bb6afcf0eb760493d09a7fb36a5b555d99d45960a07b062b4f106d580/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0db268bb6afcf0eb760493d09a7fb36a5b555d99d45960a07b062b4f106d580/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:10 np0005592157 podman[259911]: 2026-01-22 14:06:10.668492144 +0000 UTC m=+0.156002711 container init eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:06:10 np0005592157 podman[259911]: 2026-01-22 14:06:10.681123799 +0000 UTC m=+0.168634296 container start eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:06:10 np0005592157 podman[259911]: 2026-01-22 14:06:10.685514239 +0000 UTC m=+0.173024776 container attach eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:06:11 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:11 np0005592157 jolly_cartwright[259927]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:06:11 np0005592157 jolly_cartwright[259927]: --> relative data size: 1.0
Jan 22 09:06:11 np0005592157 jolly_cartwright[259927]: --> All data devices are unavailable
Jan 22 09:06:11 np0005592157 systemd[1]: libpod-eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6.scope: Deactivated successfully.
Jan 22 09:06:11 np0005592157 podman[259911]: 2026-01-22 14:06:11.582910448 +0000 UTC m=+1.070420955 container died eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:06:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f0db268bb6afcf0eb760493d09a7fb36a5b555d99d45960a07b062b4f106d580-merged.mount: Deactivated successfully.
Jan 22 09:06:11 np0005592157 podman[259911]: 2026-01-22 14:06:11.652592416 +0000 UTC m=+1.140102913 container remove eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:06:11 np0005592157 systemd[1]: libpod-conmon-eba54b173d6f7ff3e7671f1058656aa8e451099d1bf38d15a7404fab8655ddb6.scope: Deactivated successfully.
Jan 22 09:06:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:11.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:12.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 09:06:12 np0005592157 podman[260097]: 2026-01-22 14:06:12.452948334 +0000 UTC m=+0.035161408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:06:12 np0005592157 podman[260097]: 2026-01-22 14:06:12.560414354 +0000 UTC m=+0.142627438 container create c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 09:06:12 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:12 np0005592157 systemd[1]: Started libpod-conmon-c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75.scope.
Jan 22 09:06:12 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:06:12 np0005592157 podman[260097]: 2026-01-22 14:06:12.674613992 +0000 UTC m=+0.256827126 container init c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:06:12 np0005592157 podman[260097]: 2026-01-22 14:06:12.684680673 +0000 UTC m=+0.266893767 container start c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 22 09:06:12 np0005592157 interesting_rubin[260113]: 167 167
Jan 22 09:06:12 np0005592157 systemd[1]: libpod-c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75.scope: Deactivated successfully.
Jan 22 09:06:12 np0005592157 podman[260097]: 2026-01-22 14:06:12.697695668 +0000 UTC m=+0.279908802 container attach c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:06:12 np0005592157 podman[260097]: 2026-01-22 14:06:12.698677242 +0000 UTC m=+0.280890376 container died c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 22 09:06:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7bb63ca20754534ae63b5b0eb0b0df05c64b259e305a9ed962992b30902e71b5-merged.mount: Deactivated successfully.
Jan 22 09:06:12 np0005592157 podman[260097]: 2026-01-22 14:06:12.765770995 +0000 UTC m=+0.347984059 container remove c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:06:12 np0005592157 systemd[1]: libpod-conmon-c1c79cb95139d71529efd2a882da89dc072939df23930d427141c0add7151b75.scope: Deactivated successfully.
Jan 22 09:06:12 np0005592157 podman[260137]: 2026-01-22 14:06:12.977718181 +0000 UTC m=+0.060410958 container create f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_volhard, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:06:13 np0005592157 systemd[1]: Started libpod-conmon-f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b.scope.
Jan 22 09:06:13 np0005592157 podman[260137]: 2026-01-22 14:06:12.946190595 +0000 UTC m=+0.028883422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:06:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:06:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f5be80d7b5a9c42bf0dad963ce1be10803a0d480cd6cef4f25e26b2c823c5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f5be80d7b5a9c42bf0dad963ce1be10803a0d480cd6cef4f25e26b2c823c5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f5be80d7b5a9c42bf0dad963ce1be10803a0d480cd6cef4f25e26b2c823c5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f5be80d7b5a9c42bf0dad963ce1be10803a0d480cd6cef4f25e26b2c823c5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:13 np0005592157 podman[260137]: 2026-01-22 14:06:13.087003636 +0000 UTC m=+0.169696403 container init f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:06:13 np0005592157 podman[260137]: 2026-01-22 14:06:13.100698638 +0000 UTC m=+0.183391375 container start f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_volhard, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 09:06:13 np0005592157 podman[260137]: 2026-01-22 14:06:13.10480976 +0000 UTC m=+0.187502507 container attach f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:06:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:13.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]: {
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:    "0": [
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:        {
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "devices": [
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "/dev/loop3"
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            ],
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "lv_name": "ceph_lv0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "lv_size": "7511998464",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "name": "ceph_lv0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "tags": {
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.cluster_name": "ceph",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.crush_device_class": "",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.encrypted": "0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.osd_id": "0",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.type": "block",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:                "ceph.vdo": "0"
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            },
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "type": "block",
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:            "vg_name": "ceph_vg0"
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:        }
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]:    ]
Jan 22 09:06:13 np0005592157 romantic_volhard[260153]: }
Jan 22 09:06:13 np0005592157 systemd[1]: libpod-f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b.scope: Deactivated successfully.
Jan 22 09:06:13 np0005592157 podman[260137]: 2026-01-22 14:06:13.893742565 +0000 UTC m=+0.976435342 container died f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:06:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c2f5be80d7b5a9c42bf0dad963ce1be10803a0d480cd6cef4f25e26b2c823c5d-merged.mount: Deactivated successfully.
Jan 22 09:06:13 np0005592157 podman[260137]: 2026-01-22 14:06:13.961797712 +0000 UTC m=+1.044490459 container remove f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_volhard, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:06:13 np0005592157 systemd[1]: libpod-conmon-f8b0d89663755033e4244fa65a739cef49e523ec133fd3ac60259b3f7c25d54b.scope: Deactivated successfully.
Jan 22 09:06:14 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:14 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:14.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:14 np0005592157 podman[260316]: 2026-01-22 14:06:14.790759055 +0000 UTC m=+0.056393778 container create 807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:06:14 np0005592157 systemd[1]: Started libpod-conmon-807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c.scope.
Jan 22 09:06:14 np0005592157 podman[260316]: 2026-01-22 14:06:14.760457629 +0000 UTC m=+0.026092362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:06:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:06:14 np0005592157 podman[260316]: 2026-01-22 14:06:14.879243121 +0000 UTC m=+0.144877924 container init 807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:06:14 np0005592157 podman[260316]: 2026-01-22 14:06:14.885709383 +0000 UTC m=+0.151344136 container start 807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:06:14 np0005592157 podman[260316]: 2026-01-22 14:06:14.89003216 +0000 UTC m=+0.155666913 container attach 807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:06:14 np0005592157 modest_davinci[260332]: 167 167
Jan 22 09:06:14 np0005592157 systemd[1]: libpod-807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c.scope: Deactivated successfully.
Jan 22 09:06:14 np0005592157 podman[260316]: 2026-01-22 14:06:14.895093507 +0000 UTC m=+0.160728250 container died 807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:06:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-33f9d81a7bc4e23db9e90be24716a4369f91cd016c4f97fe3427a30e04c25de3-merged.mount: Deactivated successfully.
Jan 22 09:06:14 np0005592157 podman[260316]: 2026-01-22 14:06:14.942092749 +0000 UTC m=+0.207727492 container remove 807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:06:14 np0005592157 systemd[1]: libpod-conmon-807784ff8f31f1f53aadad7d5e31e115c56ecc036aaf05d003c1c285467fe96c.scope: Deactivated successfully.
Jan 22 09:06:15 np0005592157 podman[260355]: 2026-01-22 14:06:15.189431887 +0000 UTC m=+0.061054944 container create ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:06:15 np0005592157 systemd[1]: Started libpod-conmon-ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3.scope.
Jan 22 09:06:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:06:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba4b42e8660661a7cc17f2ca545390a27e92fa190ce52fa421e3d337899c051/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba4b42e8660661a7cc17f2ca545390a27e92fa190ce52fa421e3d337899c051/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba4b42e8660661a7cc17f2ca545390a27e92fa190ce52fa421e3d337899c051/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:15 np0005592157 podman[260355]: 2026-01-22 14:06:15.169198732 +0000 UTC m=+0.040821819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:06:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba4b42e8660661a7cc17f2ca545390a27e92fa190ce52fa421e3d337899c051/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:06:15 np0005592157 podman[260355]: 2026-01-22 14:06:15.282958309 +0000 UTC m=+0.154581446 container init ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:06:15 np0005592157 podman[260355]: 2026-01-22 14:06:15.299540123 +0000 UTC m=+0.171163180 container start ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:06:15 np0005592157 podman[260355]: 2026-01-22 14:06:15.303735527 +0000 UTC m=+0.175358614 container attach ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:06:15 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:15 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 1764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:15.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]: {
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]:        "osd_id": 0,
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]:        "type": "bluestore"
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]:    }
Jan 22 09:06:16 np0005592157 relaxed_benz[260371]: }
Jan 22 09:06:16 np0005592157 systemd[1]: libpod-ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3.scope: Deactivated successfully.
Jan 22 09:06:16 np0005592157 podman[260355]: 2026-01-22 14:06:16.249913882 +0000 UTC m=+1.121536939 container died ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:06:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fba4b42e8660661a7cc17f2ca545390a27e92fa190ce52fa421e3d337899c051-merged.mount: Deactivated successfully.
Jan 22 09:06:16 np0005592157 podman[260355]: 2026-01-22 14:06:16.318303308 +0000 UTC m=+1.189926355 container remove ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:06:16 np0005592157 systemd[1]: libpod-conmon-ba6158f29219545edf8650248934d2de8650c3c1a042cadc349e56c4e4fde0b3.scope: Deactivated successfully.
Jan 22 09:06:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:16.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:06:16 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:06:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d1e92d96-cd22-4f94-bdff-e560b116e988 does not exist
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d1a66c19-e50f-4baa-969d-79e7911ba8be does not exist
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 44c5bbb5-893e-4e77-9c94-23d2e8c79045 does not exist
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:06:17 np0005592157 nova_compute[245707]: 2026-01-22 14:06:17.290 245711 DEBUG oslo_concurrency.lockutils [None req-55669f64-8f18-4207-9dca-69477e36f675 2a05477c7ac04831851903bf6cdf8dd0 231b025ece4a4936b6f1b9656712096a - - default default] Acquiring lock "1089392f-9bda-4904-9370-95fc2c3dd7c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:06:17 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:17.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:18.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:18 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:18 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:19.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:06:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:20.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:06:20 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:20 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 1769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:21 np0005592157 podman[260457]: 2026-01-22 14:06:21.371646288 +0000 UTC m=+0.088535169 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:06:21 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:21.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:22 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:22 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:24 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:24.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:25 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:25 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 1774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:25.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:26 np0005592157 podman[260506]: 2026-01-22 14:06:26.264051225 +0000 UTC m=+0.113767028 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 09:06:26 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:26.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:27 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:28.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:28 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:28 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:29 np0005592157 nova_compute[245707]: 2026-01-22 14:06:29.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:29 np0005592157 nova_compute[245707]: 2026-01-22 14:06:29.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:06:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1779 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:30 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 1779 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:31 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:31 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:31 np0005592157 nova_compute[245707]: 2026-01-22 14:06:31.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:31.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:32.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:32 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:32 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:33 np0005592157 nova_compute[245707]: 2026-01-22 14:06:33.240 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:33 np0005592157 nova_compute[245707]: 2026-01-22 14:06:33.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:33 np0005592157 nova_compute[245707]: 2026-01-22 14:06:33.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:33 np0005592157 nova_compute[245707]: 2026-01-22 14:06:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:34 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:34 np0005592157 nova_compute[245707]: 2026-01-22 14:06:34.353 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:06:34 np0005592157 nova_compute[245707]: 2026-01-22 14:06:34.354 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:06:34 np0005592157 nova_compute[245707]: 2026-01-22 14:06:34.354 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:06:34 np0005592157 nova_compute[245707]: 2026-01-22 14:06:34.355 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:06:34 np0005592157 nova_compute[245707]: 2026-01-22 14:06:34.355 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:06:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:34.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:06:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190855873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:06:34 np0005592157 nova_compute[245707]: 2026-01-22 14:06:34.867 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.135 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.137 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.137 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.137 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.348 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.349 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.349 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.349 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:06:35 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:35 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 1784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.451 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:06:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:35.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:06:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478780440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.895 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:06:35 np0005592157 nova_compute[245707]: 2026-01-22 14:06:35.903 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:06:36 np0005592157 nova_compute[245707]: 2026-01-22 14:06:36.120 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:06:36 np0005592157 nova_compute[245707]: 2026-01-22 14:06:36.312 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:06:36 np0005592157 nova_compute[245707]: 2026-01-22 14:06:36.313 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:06:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:36.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:36 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.314 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.315 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.315 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.376 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.376 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.377 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.377 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:37 np0005592157 nova_compute[245707]: 2026-01-22 14:06:37.377 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:37 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:38 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:39 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:39 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 1789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:40.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:40 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:41.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:42 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:42.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:06:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:43.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:06:43 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:44.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:45 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:45 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:45 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 1794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:46 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:46.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:06:47
Jan 22 09:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root']
Jan 22 09:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:06:47 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:06:47.574 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:06:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:06:47.575 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:06:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:06:47.575 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:06:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:47.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:06:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:48.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:06:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:48 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:48 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:49.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:49 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:50.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:50 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 1798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:50 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:52 np0005592157 podman[260664]: 2026-01-22 14:06:52.354556896 +0000 UTC m=+0.079386261 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:06:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:52.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:52 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:53.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:53 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:54.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:54 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:54 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 1803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:55.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:55 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:56.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:06:56 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:57 np0005592157 podman[260686]: 2026-01-22 14:06:57.400533102 +0000 UTC m=+0.128268689 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:06:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:06:58 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:58 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 22 09:06:58 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 09:06:58 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 09:06:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:58.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 22 09:06:59 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:06:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:06:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:59.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:00 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:07:00 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 1808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:00.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 09:07:01 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:02.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 09:07:02 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:07:02 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:03 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:07:03 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:07:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:04.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 0 B/s wr, 87 op/s
Jan 22 09:07:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:04 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:04 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 1813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:05.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:05 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:06.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 09:07:07 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:07.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:08 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 09:07:09 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:09.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000075s ======
Jan 22 09:07:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 22 09:07:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Jan 22 09:07:10 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:10 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 1818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:11 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:11 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:11.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:12.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Jan 22 09:07:12 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:13 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:14.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Jan 22 09:07:14 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:15 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 1823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:15 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:15.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:16.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 09:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:07:16 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:07:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 09:07:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:17.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 09:07:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:07:18 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:18.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:19 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:19.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:20 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:20 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 1828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:07:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:07:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fed72b43-bd98-4262-a746-a78438c127c9 does not exist
Jan 22 09:07:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0059092c-ee64-4d95-8a0e-10d0b1828e7a does not exist
Jan 22 09:07:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 07fd252c-a5d2-4960-903e-906d9441407f does not exist
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:07:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:21.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:21 np0005592157 podman[261049]: 2026-01-22 14:07:21.944205537 +0000 UTC m=+0.045651599 container create 48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:07:21 np0005592157 systemd[1]: Started libpod-conmon-48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5.scope.
Jan 22 09:07:22 np0005592157 podman[261049]: 2026-01-22 14:07:21.922599438 +0000 UTC m=+0.024045520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:07:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:07:22 np0005592157 podman[261049]: 2026-01-22 14:07:22.051869242 +0000 UTC m=+0.153315334 container init 48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:07:22 np0005592157 podman[261049]: 2026-01-22 14:07:22.060155129 +0000 UTC m=+0.161601191 container start 48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:07:22 np0005592157 podman[261049]: 2026-01-22 14:07:22.063889932 +0000 UTC m=+0.165336044 container attach 48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:07:22 np0005592157 priceless_dijkstra[261065]: 167 167
Jan 22 09:07:22 np0005592157 systemd[1]: libpod-48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5.scope: Deactivated successfully.
Jan 22 09:07:22 np0005592157 podman[261049]: 2026-01-22 14:07:22.068525928 +0000 UTC m=+0.169972020 container died 48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:07:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-da2e93b8afed271e03c1d336de03070549c7fdd0923d16bca9e2d31066ebb261-merged.mount: Deactivated successfully.
Jan 22 09:07:22 np0005592157 podman[261049]: 2026-01-22 14:07:22.114502714 +0000 UTC m=+0.215948816 container remove 48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:07:22 np0005592157 systemd[1]: libpod-conmon-48a0c5b8edc520506fd2b1549f112acb5ff90e811f086edd9384c8192c92a7e5.scope: Deactivated successfully.
Jan 22 09:07:22 np0005592157 podman[261089]: 2026-01-22 14:07:22.349871654 +0000 UTC m=+0.082849447 container create c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:07:22 np0005592157 podman[261089]: 2026-01-22 14:07:22.298216146 +0000 UTC m=+0.031193999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:07:22 np0005592157 systemd[1]: Started libpod-conmon-c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0.scope.
Jan 22 09:07:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:07:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec1ebeb00803f194471d2258087aa723053002d176c36f3ae9ea741566dc8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec1ebeb00803f194471d2258087aa723053002d176c36f3ae9ea741566dc8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec1ebeb00803f194471d2258087aa723053002d176c36f3ae9ea741566dc8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec1ebeb00803f194471d2258087aa723053002d176c36f3ae9ea741566dc8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec1ebeb00803f194471d2258087aa723053002d176c36f3ae9ea741566dc8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:22 np0005592157 podman[261089]: 2026-01-22 14:07:22.518372877 +0000 UTC m=+0.251350670 container init c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:07:22 np0005592157 podman[261089]: 2026-01-22 14:07:22.532196181 +0000 UTC m=+0.265173954 container start c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:07:22 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:22 np0005592157 podman[261089]: 2026-01-22 14:07:22.586961947 +0000 UTC m=+0.319939690 container attach c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:07:22 np0005592157 podman[261106]: 2026-01-22 14:07:22.705025401 +0000 UTC m=+0.276489116 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 09:07:23 np0005592157 musing_chatelet[261104]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:07:23 np0005592157 musing_chatelet[261104]: --> relative data size: 1.0
Jan 22 09:07:23 np0005592157 musing_chatelet[261104]: --> All data devices are unavailable
Jan 22 09:07:23 np0005592157 systemd[1]: libpod-c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0.scope: Deactivated successfully.
Jan 22 09:07:23 np0005592157 podman[261089]: 2026-01-22 14:07:23.467333801 +0000 UTC m=+1.200311584 container died c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:07:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-71ec1ebeb00803f194471d2258087aa723053002d176c36f3ae9ea741566dc8d-merged.mount: Deactivated successfully.
Jan 22 09:07:23 np0005592157 podman[261089]: 2026-01-22 14:07:23.550225159 +0000 UTC m=+1.283202922 container remove c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:07:23 np0005592157 systemd[1]: libpod-conmon-c6269ba9b6d64e14aebe8ffbe32e55e178df134007335bd8c2e036e7a3b89eb0.scope: Deactivated successfully.
Jan 22 09:07:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:24 np0005592157 podman[261295]: 2026-01-22 14:07:24.399345783 +0000 UTC m=+0.094473857 container create 2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:07:24 np0005592157 systemd[1]: Started libpod-conmon-2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be.scope.
Jan 22 09:07:24 np0005592157 podman[261295]: 2026-01-22 14:07:24.372413311 +0000 UTC m=+0.067541395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:07:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:07:24 np0005592157 podman[261295]: 2026-01-22 14:07:24.498595278 +0000 UTC m=+0.193723332 container init 2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:07:24 np0005592157 podman[261295]: 2026-01-22 14:07:24.506435844 +0000 UTC m=+0.201563878 container start 2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kowalevski, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:07:24 np0005592157 podman[261295]: 2026-01-22 14:07:24.510964767 +0000 UTC m=+0.206092801 container attach 2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:07:24 np0005592157 nice_kowalevski[261311]: 167 167
Jan 22 09:07:24 np0005592157 systemd[1]: libpod-2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be.scope: Deactivated successfully.
Jan 22 09:07:24 np0005592157 podman[261295]: 2026-01-22 14:07:24.515800287 +0000 UTC m=+0.210928321 container died 2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:07:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7d1a72f6bb82e0ded99a33b504b6c82e901ce9ad7e657b593bf4b8f06da508ba-merged.mount: Deactivated successfully.
Jan 22 09:07:24 np0005592157 podman[261295]: 2026-01-22 14:07:24.555766574 +0000 UTC m=+0.250894608 container remove 2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:07:24 np0005592157 systemd[1]: libpod-conmon-2568f9e2a021b8f0cddc6c9afecd944967de650e432e10e9addb6b8f0ff5d9be.scope: Deactivated successfully.
Jan 22 09:07:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:24 np0005592157 podman[261336]: 2026-01-22 14:07:24.760596152 +0000 UTC m=+0.058792857 container create d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_perlman, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 09:07:24 np0005592157 systemd[1]: Started libpod-conmon-d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6.scope.
Jan 22 09:07:24 np0005592157 podman[261336]: 2026-01-22 14:07:24.733467075 +0000 UTC m=+0.031663830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:07:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:07:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3accd2700aa2ea27eed356a45ba98c868f2072593975e4ec20fab25ba23f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3accd2700aa2ea27eed356a45ba98c868f2072593975e4ec20fab25ba23f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3accd2700aa2ea27eed356a45ba98c868f2072593975e4ec20fab25ba23f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3accd2700aa2ea27eed356a45ba98c868f2072593975e4ec20fab25ba23f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:24 np0005592157 podman[261336]: 2026-01-22 14:07:24.868122634 +0000 UTC m=+0.166319369 container init d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:07:24 np0005592157 podman[261336]: 2026-01-22 14:07:24.877331663 +0000 UTC m=+0.175528318 container start d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:07:24 np0005592157 podman[261336]: 2026-01-22 14:07:24.882441591 +0000 UTC m=+0.180638336 container attach d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_perlman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]: {
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:    "0": [
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:        {
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "devices": [
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "/dev/loop3"
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            ],
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "lv_name": "ceph_lv0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "lv_size": "7511998464",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "name": "ceph_lv0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "tags": {
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.cluster_name": "ceph",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.crush_device_class": "",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.encrypted": "0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.osd_id": "0",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.type": "block",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:                "ceph.vdo": "0"
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            },
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "type": "block",
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:            "vg_name": "ceph_vg0"
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:        }
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]:    ]
Jan 22 09:07:25 np0005592157 romantic_perlman[261352]: }
Jan 22 09:07:25 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:25 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:25 np0005592157 systemd[1]: libpod-d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6.scope: Deactivated successfully.
Jan 22 09:07:25 np0005592157 podman[261336]: 2026-01-22 14:07:25.666881463 +0000 UTC m=+0.965078168 container died d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:07:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ed3accd2700aa2ea27eed356a45ba98c868f2072593975e4ec20fab25ba23f81-merged.mount: Deactivated successfully.
Jan 22 09:07:25 np0005592157 podman[261336]: 2026-01-22 14:07:25.74336562 +0000 UTC m=+1.041562295 container remove d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:07:25 np0005592157 systemd[1]: libpod-conmon-d1f46dfcd5cfdd2c364ae4b1593e1eb73cf33f281061345a71f069b6466404d6.scope: Deactivated successfully.
Jan 22 09:07:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:26 np0005592157 podman[261513]: 2026-01-22 14:07:26.579452151 +0000 UTC m=+0.072906239 container create 3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:07:26 np0005592157 systemd[1]: Started libpod-conmon-3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638.scope.
Jan 22 09:07:26 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:26 np0005592157 podman[261513]: 2026-01-22 14:07:26.553094393 +0000 UTC m=+0.046548511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:07:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:07:26 np0005592157 podman[261513]: 2026-01-22 14:07:26.683986838 +0000 UTC m=+0.177440916 container init 3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:07:26 np0005592157 podman[261513]: 2026-01-22 14:07:26.691833473 +0000 UTC m=+0.185287521 container start 3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:07:26 np0005592157 podman[261513]: 2026-01-22 14:07:26.695421723 +0000 UTC m=+0.188875801 container attach 3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 09:07:26 np0005592157 xenodochial_proskuriakova[261551]: 167 167
Jan 22 09:07:26 np0005592157 systemd[1]: libpod-3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638.scope: Deactivated successfully.
Jan 22 09:07:26 np0005592157 podman[261513]: 2026-01-22 14:07:26.697762811 +0000 UTC m=+0.191216859 container died 3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:07:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-576c0cd38de2587251a089587b46dea80452a730ee27585afdd57da78815576a-merged.mount: Deactivated successfully.
Jan 22 09:07:26 np0005592157 podman[261513]: 2026-01-22 14:07:26.745376079 +0000 UTC m=+0.238830117 container remove 3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:07:26 np0005592157 systemd[1]: libpod-conmon-3ae9ac5cfdb622ab43c655bdf6d4d4205497caa6b09594b6deead186de5af638.scope: Deactivated successfully.
Jan 22 09:07:26 np0005592157 podman[261603]: 2026-01-22 14:07:26.948020222 +0000 UTC m=+0.045807293 container create 3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:07:27 np0005592157 systemd[1]: Started libpod-conmon-3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38.scope.
Jan 22 09:07:27 np0005592157 podman[261603]: 2026-01-22 14:07:26.929761227 +0000 UTC m=+0.027548328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:07:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:07:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b6e392ff94ab523360a529dbd89c4e4e1d059ad59ae07147afc63ae581ca7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b6e392ff94ab523360a529dbd89c4e4e1d059ad59ae07147afc63ae581ca7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b6e392ff94ab523360a529dbd89c4e4e1d059ad59ae07147afc63ae581ca7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b6e392ff94ab523360a529dbd89c4e4e1d059ad59ae07147afc63ae581ca7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:07:27 np0005592157 podman[261603]: 2026-01-22 14:07:27.052225501 +0000 UTC m=+0.150012632 container init 3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:07:27 np0005592157 podman[261603]: 2026-01-22 14:07:27.064770614 +0000 UTC m=+0.162557695 container start 3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:07:27 np0005592157 podman[261603]: 2026-01-22 14:07:27.069304947 +0000 UTC m=+0.167092038 container attach 3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:07:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:27.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]: {
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]:        "osd_id": 0,
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]:        "type": "bluestore"
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]:    }
Jan 22 09:07:27 np0005592157 vigorous_hermann[261619]: }
Jan 22 09:07:28 np0005592157 systemd[1]: libpod-3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38.scope: Deactivated successfully.
Jan 22 09:07:28 np0005592157 podman[261603]: 2026-01-22 14:07:28.010544459 +0000 UTC m=+1.108331560 container died 3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:07:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-03b6e392ff94ab523360a529dbd89c4e4e1d059ad59ae07147afc63ae581ca7f-merged.mount: Deactivated successfully.
Jan 22 09:07:28 np0005592157 podman[261603]: 2026-01-22 14:07:28.08596306 +0000 UTC m=+1.183750151 container remove 3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 09:07:28 np0005592157 systemd[1]: libpod-conmon-3b2cb99bf4bee36da450e53a28d046c0197cc6b1b8bcea13b0eebc3eff8d7d38.scope: Deactivated successfully.
Jan 22 09:07:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:07:29 np0005592157 podman[261643]: 2026-01-22 14:07:28.218897155 +0000 UTC m=+0.157068498 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:07:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:29.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 52bcf251-efea-4fe8-9a5f-71aed4cfe65f does not exist
Jan 22 09:07:29 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0f53c2c0-9174-4d7e-a7a8-db7c22e503b0 does not exist
Jan 22 09:07:29 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bcf54508-b011-4b75-9c13-a94a81af142d does not exist
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:29.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:30 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:30 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:31 np0005592157 nova_compute[245707]: 2026-01-22 14:07:31.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:31 np0005592157 nova_compute[245707]: 2026-01-22 14:07:31.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:31 np0005592157 nova_compute[245707]: 2026-01-22 14:07:31.246 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:07:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:31.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:31.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:32 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:33 np0005592157 nova_compute[245707]: 2026-01-22 14:07:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:33 np0005592157 nova_compute[245707]: 2026-01-22 14:07:33.314 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:07:33 np0005592157 nova_compute[245707]: 2026-01-22 14:07:33.315 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:07:33 np0005592157 nova_compute[245707]: 2026-01-22 14:07:33.316 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:07:33 np0005592157 nova_compute[245707]: 2026-01-22 14:07:33.316 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:07:33 np0005592157 nova_compute[245707]: 2026-01-22 14:07:33.317 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:07:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:33.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:33.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:07:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113320093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:07:33 np0005592157 nova_compute[245707]: 2026-01-22 14:07:33.856 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:07:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.049 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.051 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.051 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.051 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.158 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.159 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.159 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.159 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.230 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:07:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:07:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438682208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.702 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.710 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.736 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.738 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:07:34 np0005592157 nova_compute[245707]: 2026-01-22 14:07:34.739 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:07:34 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:34 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:07:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:35.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:07:35 np0005592157 nova_compute[245707]: 2026-01-22 14:07:35.735 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:35 np0005592157 nova_compute[245707]: 2026-01-22 14:07:35.736 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:35 np0005592157 nova_compute[245707]: 2026-01-22 14:07:35.775 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:35 np0005592157 nova_compute[245707]: 2026-01-22 14:07:35.775 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:07:35 np0005592157 nova_compute[245707]: 2026-01-22 14:07:35.776 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:07:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:35.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:36 np0005592157 nova_compute[245707]: 2026-01-22 14:07:36.081 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:07:36 np0005592157 nova_compute[245707]: 2026-01-22 14:07:36.081 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:07:36 np0005592157 nova_compute[245707]: 2026-01-22 14:07:36.082 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:07:36 np0005592157 nova_compute[245707]: 2026-01-22 14:07:36.083 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:36 np0005592157 nova_compute[245707]: 2026-01-22 14:07:36.083 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:36 np0005592157 nova_compute[245707]: 2026-01-22 14:07:36.083 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:37 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:37.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:37.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:38 np0005592157 nova_compute[245707]: 2026-01-22 14:07:38.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:38 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:39 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:39.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:40 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:40 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:40 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:41.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:41 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:41.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:42 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:43.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:43 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:43.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:44 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:44 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:45.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:45.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:45 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:07:46 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:07:47
Jan 22 09:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'backups']
Jan 22 09:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:07:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:47.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:07:47.575 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:07:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:07:47.579 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:07:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:07:47.579 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:07:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:47.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:47 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:49 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:49.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:49.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:50 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:50 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:51 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:51.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:52 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:53 np0005592157 podman[261837]: 2026-01-22 14:07:53.363062827 +0000 UTC m=+0.086231372 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 22 09:07:53 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:07:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:53.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:07:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:53.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:54 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:55.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:55 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:55 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:55.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:56 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:57.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:57 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:57.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:58 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:07:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:59.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:07:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:07:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:59.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:07:59 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:59 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:00 np0005592157 podman[261860]: 2026-01-22 14:08:00.429831706 +0000 UTC m=+0.160544175 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:08:00 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:01.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:01.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:02 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:03.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:03 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:03.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:08:04 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:04 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:05.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:05 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:05 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:05.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:06 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:07.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:07.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:09 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:09.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:09.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:11 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:11 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:11.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:11.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:13 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:13.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:13.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:15.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:15 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:15 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:15.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:08:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:17.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:17 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:17.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:18 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:19.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:19 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:19 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:19.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:20 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:21.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:23.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:24 np0005592157 podman[261950]: 2026-01-22 14:08:24.20409097 +0000 UTC m=+0.081412511 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 09:08:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:25 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:25 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:25.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:25.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:26 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:27.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:08:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:27.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:08:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:29 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:30.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:30 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:30 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:30 np0005592157 podman[262121]: 2026-01-22 14:08:30.638537628 +0000 UTC m=+0.132235130 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:08:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:31 np0005592157 nova_compute[245707]: 2026-01-22 14:08:31.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:31 np0005592157 nova_compute[245707]: 2026-01-22 14:08:31.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:31 np0005592157 nova_compute[245707]: 2026-01-22 14:08:31.246 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:08:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:32.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:32 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 73d16288-a5b5-4d1d-addb-f00c06547926 does not exist
Jan 22 09:08:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 89b99849-32ad-4d4d-ab1b-530c004ad506 does not exist
Jan 22 09:08:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3ef8f9d0-f117-4d50-898f-3a5f84e1bf2d does not exist
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:08:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:08:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:08:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:33.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:34 np0005592157 podman[262321]: 2026-01-22 14:08:34.074804714 +0000 UTC m=+0.062000718 container create 9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:08:34 np0005592157 systemd[1]: Started libpod-conmon-9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75.scope.
Jan 22 09:08:34 np0005592157 podman[262321]: 2026-01-22 14:08:34.043301738 +0000 UTC m=+0.030497852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:34.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:34 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:08:34 np0005592157 podman[262321]: 2026-01-22 14:08:34.199014652 +0000 UTC m=+0.186210656 container init 9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_darwin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:08:34 np0005592157 podman[262321]: 2026-01-22 14:08:34.20734933 +0000 UTC m=+0.194545334 container start 9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:08:34 np0005592157 podman[262321]: 2026-01-22 14:08:34.212300554 +0000 UTC m=+0.199496578 container attach 9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_darwin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:08:34 np0005592157 lucid_darwin[262337]: 167 167
Jan 22 09:08:34 np0005592157 systemd[1]: libpod-9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75.scope: Deactivated successfully.
Jan 22 09:08:34 np0005592157 conmon[262337]: conmon 9f30fa1c081c04337e1e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75.scope/container/memory.events
Jan 22 09:08:34 np0005592157 podman[262321]: 2026-01-22 14:08:34.218300993 +0000 UTC m=+0.205496997 container died 9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:08:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7bae8cc484120f9ce1f6b8edf33f2949a6c03eb69d95adae83563615f4c2a9b0-merged.mount: Deactivated successfully.
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:34 np0005592157 podman[262321]: 2026-01-22 14:08:34.269141401 +0000 UTC m=+0.256337405 container remove 9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.268 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.269 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.269 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.269 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.270 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:08:34 np0005592157 systemd[1]: libpod-conmon-9f30fa1c081c04337e1e9b8f106bf1b7aab77d4dd87c85ff04f83a4596ed2c75.scope: Deactivated successfully.
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:08:34 np0005592157 podman[262364]: 2026-01-22 14:08:34.448105796 +0000 UTC m=+0.057551037 container create 5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:08:34 np0005592157 podman[262364]: 2026-01-22 14:08:34.423850041 +0000 UTC m=+0.033295312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:34 np0005592157 systemd[1]: Started libpod-conmon-5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e.scope.
Jan 22 09:08:34 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:08:34 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdde847a0ada6114e7f482127893e32c48ef8ced5120758031ac2898e8660d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:34 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdde847a0ada6114e7f482127893e32c48ef8ced5120758031ac2898e8660d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:34 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdde847a0ada6114e7f482127893e32c48ef8ced5120758031ac2898e8660d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:34 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdde847a0ada6114e7f482127893e32c48ef8ced5120758031ac2898e8660d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:34 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdde847a0ada6114e7f482127893e32c48ef8ced5120758031ac2898e8660d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:34 np0005592157 podman[262364]: 2026-01-22 14:08:34.589983385 +0000 UTC m=+0.199428606 container init 5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_boyd, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:08:34 np0005592157 podman[262364]: 2026-01-22 14:08:34.602985899 +0000 UTC m=+0.212431120 container start 5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:08:34 np0005592157 podman[262364]: 2026-01-22 14:08:34.609953013 +0000 UTC m=+0.219398264 container attach 5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1084370533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.679 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.865 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.867 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.867 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.868 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.974 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.974 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.975 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:08:34 np0005592157 nova_compute[245707]: 2026-01-22 14:08:34.975 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:08:35 np0005592157 nova_compute[245707]: 2026-01-22 14:08:35.067 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:08:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:35 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:35 np0005592157 jovial_boyd[262398]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:08:35 np0005592157 jovial_boyd[262398]: --> relative data size: 1.0
Jan 22 09:08:35 np0005592157 jovial_boyd[262398]: --> All data devices are unavailable
Jan 22 09:08:35 np0005592157 systemd[1]: libpod-5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e.scope: Deactivated successfully.
Jan 22 09:08:35 np0005592157 podman[262364]: 2026-01-22 14:08:35.446798167 +0000 UTC m=+1.056243408 container died 5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:08:35 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4fdde847a0ada6114e7f482127893e32c48ef8ced5120758031ac2898e8660d1-merged.mount: Deactivated successfully.
Jan 22 09:08:35 np0005592157 podman[262364]: 2026-01-22 14:08:35.501949403 +0000 UTC m=+1.111394634 container remove 5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_boyd, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:08:35 np0005592157 systemd[1]: libpod-conmon-5cf2de33411a055f789c159da423d3206ee7bb117a5a6a121d546ee9923d6c6e.scope: Deactivated successfully.
Jan 22 09:08:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:08:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271794858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:08:35 np0005592157 nova_compute[245707]: 2026-01-22 14:08:35.543 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:08:35 np0005592157 nova_compute[245707]: 2026-01-22 14:08:35.551 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:08:35 np0005592157 nova_compute[245707]: 2026-01-22 14:08:35.568 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:08:35 np0005592157 nova_compute[245707]: 2026-01-22 14:08:35.569 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:08:35 np0005592157 nova_compute[245707]: 2026-01-22 14:08:35.570 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:08:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:35.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:36 np0005592157 podman[262592]: 2026-01-22 14:08:36.090838942 +0000 UTC m=+0.046736126 container create e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:08:36 np0005592157 systemd[1]: Started libpod-conmon-e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6.scope.
Jan 22 09:08:36 np0005592157 podman[262592]: 2026-01-22 14:08:36.068466394 +0000 UTC m=+0.024363658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:36.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:08:36 np0005592157 podman[262592]: 2026-01-22 14:08:36.199494603 +0000 UTC m=+0.155391837 container init e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:08:36 np0005592157 podman[262592]: 2026-01-22 14:08:36.207268397 +0000 UTC m=+0.163165611 container start e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:08:36 np0005592157 podman[262592]: 2026-01-22 14:08:36.211727168 +0000 UTC m=+0.167624432 container attach e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:08:36 np0005592157 gracious_herschel[262608]: 167 167
Jan 22 09:08:36 np0005592157 systemd[1]: libpod-e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6.scope: Deactivated successfully.
Jan 22 09:08:36 np0005592157 podman[262592]: 2026-01-22 14:08:36.215062171 +0000 UTC m=+0.170959385 container died e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:08:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b966e3bf6a08d1827b4d7d36c84e92c0f450f65fe3fd085f19c1bcce8d6c8ee6-merged.mount: Deactivated successfully.
Jan 22 09:08:36 np0005592157 podman[262592]: 2026-01-22 14:08:36.263236973 +0000 UTC m=+0.219134197 container remove e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:08:36 np0005592157 systemd[1]: libpod-conmon-e729ffe4b22638a6fc98842231e0aa812cc36d2f335211f05f30b7a567e308e6.scope: Deactivated successfully.
Jan 22 09:08:36 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:36 np0005592157 podman[262629]: 2026-01-22 14:08:36.472556924 +0000 UTC m=+0.050557532 container create 0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:08:36 np0005592157 systemd[1]: Started libpod-conmon-0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7.scope.
Jan 22 09:08:36 np0005592157 podman[262629]: 2026-01-22 14:08:36.452314199 +0000 UTC m=+0.030314807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:08:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce5cf0165868e0f7f7d2e463b46c062a7dca061dc4450c64314229434e2da33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce5cf0165868e0f7f7d2e463b46c062a7dca061dc4450c64314229434e2da33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce5cf0165868e0f7f7d2e463b46c062a7dca061dc4450c64314229434e2da33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce5cf0165868e0f7f7d2e463b46c062a7dca061dc4450c64314229434e2da33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:36 np0005592157 nova_compute[245707]: 2026-01-22 14:08:36.568 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:36 np0005592157 nova_compute[245707]: 2026-01-22 14:08:36.568 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:36 np0005592157 nova_compute[245707]: 2026-01-22 14:08:36.569 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:36 np0005592157 podman[262629]: 2026-01-22 14:08:36.576696032 +0000 UTC m=+0.154696630 container init 0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:08:36 np0005592157 podman[262629]: 2026-01-22 14:08:36.588756553 +0000 UTC m=+0.166757121 container start 0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:08:36 np0005592157 podman[262629]: 2026-01-22 14:08:36.596197569 +0000 UTC m=+0.174198167 container attach 0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:08:37 np0005592157 nova_compute[245707]: 2026-01-22 14:08:37.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:37 np0005592157 nova_compute[245707]: 2026-01-22 14:08:37.246 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:08:37 np0005592157 nova_compute[245707]: 2026-01-22 14:08:37.246 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:08:37 np0005592157 nova_compute[245707]: 2026-01-22 14:08:37.264 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:08:37 np0005592157 nova_compute[245707]: 2026-01-22 14:08:37.265 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:08:37 np0005592157 nova_compute[245707]: 2026-01-22 14:08:37.266 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:08:37 np0005592157 objective_fermi[262645]: {
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:    "0": [
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:        {
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "devices": [
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "/dev/loop3"
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            ],
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "lv_name": "ceph_lv0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "lv_size": "7511998464",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "name": "ceph_lv0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "tags": {
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.cluster_name": "ceph",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.crush_device_class": "",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.encrypted": "0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.osd_id": "0",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.type": "block",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:                "ceph.vdo": "0"
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            },
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "type": "block",
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:            "vg_name": "ceph_vg0"
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:        }
Jan 22 09:08:37 np0005592157 objective_fermi[262645]:    ]
Jan 22 09:08:37 np0005592157 objective_fermi[262645]: }
Jan 22 09:08:37 np0005592157 systemd[1]: libpod-0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7.scope: Deactivated successfully.
Jan 22 09:08:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:37 np0005592157 podman[262655]: 2026-01-22 14:08:37.4080514 +0000 UTC m=+0.034736637 container died 0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:08:37 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9ce5cf0165868e0f7f7d2e463b46c062a7dca061dc4450c64314229434e2da33-merged.mount: Deactivated successfully.
Jan 22 09:08:37 np0005592157 podman[262655]: 2026-01-22 14:08:37.461115754 +0000 UTC m=+0.087800951 container remove 0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:08:37 np0005592157 systemd[1]: libpod-conmon-0e12632db423069bec2b79bb63ee9d9d6970cd70f3a915edb767a3eb392db6d7.scope: Deactivated successfully.
Jan 22 09:08:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:37.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:08:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:38.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:08:38 np0005592157 podman[262811]: 2026-01-22 14:08:38.217106382 +0000 UTC m=+0.051649020 container create 3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 09:08:38 np0005592157 systemd[1]: Started libpod-conmon-3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543.scope.
Jan 22 09:08:38 np0005592157 podman[262811]: 2026-01-22 14:08:38.190883018 +0000 UTC m=+0.025425706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:08:38 np0005592157 podman[262811]: 2026-01-22 14:08:38.322188563 +0000 UTC m=+0.156731251 container init 3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 09:08:38 np0005592157 podman[262811]: 2026-01-22 14:08:38.333448014 +0000 UTC m=+0.167990612 container start 3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:08:38 np0005592157 podman[262811]: 2026-01-22 14:08:38.337064974 +0000 UTC m=+0.171607662 container attach 3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:08:38 np0005592157 festive_colden[262828]: 167 167
Jan 22 09:08:38 np0005592157 systemd[1]: libpod-3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543.scope: Deactivated successfully.
Jan 22 09:08:38 np0005592157 podman[262811]: 2026-01-22 14:08:38.342436848 +0000 UTC m=+0.176979496 container died 3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:08:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-281f75cd7c0c73a163f9e5ccd0fc74025be52d03bf36ef9c118acffa298505a1-merged.mount: Deactivated successfully.
Jan 22 09:08:38 np0005592157 podman[262811]: 2026-01-22 14:08:38.401785849 +0000 UTC m=+0.236328487 container remove 3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:08:38 np0005592157 systemd[1]: libpod-conmon-3ca68c44f0a9bb77e364b4fdc42dda755701989c8660af7ebd3ccb87b5764543.scope: Deactivated successfully.
Jan 22 09:08:38 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:38 np0005592157 podman[262850]: 2026-01-22 14:08:38.590308411 +0000 UTC m=+0.064845348 container create 4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:08:38 np0005592157 systemd[1]: Started libpod-conmon-4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f.scope.
Jan 22 09:08:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:08:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ac5e1375ff207c55bfa490f026f33dbe76db6fd0a61ba81c928f6e45039560/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:38 np0005592157 podman[262850]: 2026-01-22 14:08:38.563035861 +0000 UTC m=+0.037572858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ac5e1375ff207c55bfa490f026f33dbe76db6fd0a61ba81c928f6e45039560/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ac5e1375ff207c55bfa490f026f33dbe76db6fd0a61ba81c928f6e45039560/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ac5e1375ff207c55bfa490f026f33dbe76db6fd0a61ba81c928f6e45039560/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:38 np0005592157 podman[262850]: 2026-01-22 14:08:38.671125717 +0000 UTC m=+0.145662734 container init 4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yonath, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:08:38 np0005592157 podman[262850]: 2026-01-22 14:08:38.685360922 +0000 UTC m=+0.159897859 container start 4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yonath, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:08:38 np0005592157 podman[262850]: 2026-01-22 14:08:38.68966156 +0000 UTC m=+0.164198527 container attach 4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:08:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]: {
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]:        "osd_id": 0,
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]:        "type": "bluestore"
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]:    }
Jan 22 09:08:39 np0005592157 elastic_yonath[262866]: }
Jan 22 09:08:39 np0005592157 systemd[1]: libpod-4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f.scope: Deactivated successfully.
Jan 22 09:08:39 np0005592157 podman[262850]: 2026-01-22 14:08:39.565599429 +0000 UTC m=+1.040136356 container died 4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yonath, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:08:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-32ac5e1375ff207c55bfa490f026f33dbe76db6fd0a61ba81c928f6e45039560-merged.mount: Deactivated successfully.
Jan 22 09:08:39 np0005592157 podman[262850]: 2026-01-22 14:08:39.626357365 +0000 UTC m=+1.100894292 container remove 4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yonath, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:08:39 np0005592157 systemd[1]: libpod-conmon-4d2ddb672b93d265eedde60ef4b899ba3d15e13ba5bbf5ec792e53c2cd98534f.scope: Deactivated successfully.
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:39.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:39.969445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919969513, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2325, "num_deletes": 510, "total_data_size": 3095878, "memory_usage": 3165600, "flush_reason": "Manual Compaction"}
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 22 09:08:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c52294e5-a6df-44f7-b034-4578119ab05e does not exist
Jan 22 09:08:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ce5a80b9-11d3-4235-9e67-df0c572a58b2 does not exist
Jan 22 09:08:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d799fc96-c539-4709-aaff-841792de92a7 does not exist
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920000432, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2305866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30466, "largest_seqno": 32789, "table_properties": {"data_size": 2297650, "index_size": 4006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26045, "raw_average_key_size": 20, "raw_value_size": 2276708, "raw_average_value_size": 1819, "num_data_blocks": 172, "num_entries": 1251, "num_filter_entries": 1251, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090766, "oldest_key_time": 1769090766, "file_creation_time": 1769090919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 31088 microseconds, and 11032 cpu microseconds.
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.000528) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2305866 bytes OK
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.000568) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.002899) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.002954) EVENT_LOG_v1 {"time_micros": 1769090920002917, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.002985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3084985, prev total WAL file size 3126069, number of live WAL files 2.
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.004829) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2251KB)], [65(10117KB)]
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920004905, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12666042, "oldest_snapshot_seqno": -1}
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 7227 keys, 9297572 bytes, temperature: kUnknown
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920103796, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 9297572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9254083, "index_size": 24305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 191586, "raw_average_key_size": 26, "raw_value_size": 9126952, "raw_average_value_size": 1262, "num_data_blocks": 948, "num_entries": 7227, "num_filter_entries": 7227, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769090920, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.104237) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9297572 bytes
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.105720) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.8 rd, 93.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(9.5) write-amplify(4.0) OK, records in: 8220, records dropped: 993 output_compression: NoCompression
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.105749) EVENT_LOG_v1 {"time_micros": 1769090920105736, "job": 36, "event": "compaction_finished", "compaction_time_micros": 99100, "compaction_time_cpu_micros": 46020, "output_level": 6, "num_output_files": 1, "total_output_size": 9297572, "num_input_records": 8220, "num_output_records": 7227, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920106735, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920109483, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.004736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.109655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.109671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.109673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.109675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:40.109676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:40.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:40 np0005592157 nova_compute[245707]: 2026-01-22 14:08:40.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:41 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:41.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:42 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:43 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:44.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:44 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:45 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:45 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:45.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:08:46 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:08:47
Jan 22 09:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr']
Jan 22 09:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:08:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:08:47.577 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:08:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:08:47.578 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:08:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:08:47.578 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:08:47 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:47.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:48.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:48 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:49.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:50 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:50 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:50.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:51 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:51.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:52 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:52.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.121273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933121334, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 412, "num_deletes": 251, "total_data_size": 271899, "memory_usage": 280584, "flush_reason": "Manual Compaction"}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933127434, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 268216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32790, "largest_seqno": 33201, "table_properties": {"data_size": 265866, "index_size": 450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6293, "raw_average_key_size": 19, "raw_value_size": 260953, "raw_average_value_size": 795, "num_data_blocks": 20, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090919, "oldest_key_time": 1769090919, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 6227 microseconds, and 2824 cpu microseconds.
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127502) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 268216 bytes OK
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127533) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.130373) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.130396) EVENT_LOG_v1 {"time_micros": 1769090933130389, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.130428) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 269304, prev total WAL file size 269304, number of live WAL files 2.
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.131049) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(261KB)], [68(9079KB)]
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933131100, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 9565788, "oldest_snapshot_seqno": -1}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 7043 keys, 7847163 bytes, temperature: kUnknown
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933212393, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 7847163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7806079, "index_size": 22348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 188551, "raw_average_key_size": 26, "raw_value_size": 7683160, "raw_average_value_size": 1090, "num_data_blocks": 861, "num_entries": 7043, "num_filter_entries": 7043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.212798) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 7847163 bytes
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.215621) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.4 rd, 96.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(64.9) write-amplify(29.3) OK, records in: 7555, records dropped: 512 output_compression: NoCompression
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.215655) EVENT_LOG_v1 {"time_micros": 1769090933215640, "job": 38, "event": "compaction_finished", "compaction_time_micros": 81456, "compaction_time_cpu_micros": 29011, "output_level": 6, "num_output_files": 1, "total_output_size": 7847163, "num_input_records": 7555, "num_output_records": 7043, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933216189, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933219683, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.130906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.219914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.219962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.219966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.219969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:08:53.219972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:53.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:54 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:54.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:54 np0005592157 podman[263011]: 2026-01-22 14:08:54.365260001 +0000 UTC m=+0.089527375 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 09:08:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:55 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:55 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:08:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:55.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:08:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:56.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:56 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:57 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:57 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:57.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:58.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:58 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:08:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:08:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:08:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:59.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:08:59 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:59 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:00.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:00 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:01 np0005592157 podman[263033]: 2026-01-22 14:09:01.407736082 +0000 UTC m=+0.134681371 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 09:09:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:09:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:01.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:09:01 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:02 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:03.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:09:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:04.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:04 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:05 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:05 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:05.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:06.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:06 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:07 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:07.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:08.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:09 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:09.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:10.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:10 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:10 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:11 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:11.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:12.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:13 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:14.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:15 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:15 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:15.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:16.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:09:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:17 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:18.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:18 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:19 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:19.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:20.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:20 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:20 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:22.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:22 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:23.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:24.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:24 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:25 np0005592157 podman[263122]: 2026-01-22 14:09:25.342784513 +0000 UTC m=+0.070142201 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:09:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:25.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:25 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:26.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:28.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:29 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:29.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:30 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:30 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:30.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:31 np0005592157 nova_compute[245707]: 2026-01-22 14:09:31.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:31 np0005592157 nova_compute[245707]: 2026-01-22 14:09:31.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:09:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:31.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:32 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:32 np0005592157 nova_compute[245707]: 2026-01-22 14:09:32.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:32.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:32 np0005592157 podman[263195]: 2026-01-22 14:09:32.368293612 +0000 UTC m=+0.105487343 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:09:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:33.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:34 np0005592157 nova_compute[245707]: 2026-01-22 14:09:34.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:34.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:34 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1962 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.285 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.286 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.286 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.286 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.287 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:09:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:35 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1962 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:09:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613868033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.767 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.936 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.938 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.938 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:09:35 np0005592157 nova_compute[245707]: 2026-01-22 14:09:35.939 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:09:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:35.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.280 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.281 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.281 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.281 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.343 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:09:36 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:09:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472558186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.835 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.841 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.961 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.962 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:09:36 np0005592157 nova_compute[245707]: 2026-01-22 14:09:36.963 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:09:37 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:37.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:38 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:38 np0005592157 nova_compute[245707]: 2026-01-22 14:09:38.958 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:38 np0005592157 nova_compute[245707]: 2026-01-22 14:09:38.959 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.001 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.001 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.001 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.018 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.019 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.019 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.019 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:09:39 np0005592157 nova_compute[245707]: 2026-01-22 14:09:39.019 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:09:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:39 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:39.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:40 np0005592157 nova_compute[245707]: 2026-01-22 14:09:40.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:09:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:40 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:40 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:40 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ba024779-232a-4758-932c-a6561dee2381 does not exist
Jan 22 09:09:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 74f57705-b629-40ed-9bb7-81b6e941e114 does not exist
Jan 22 09:09:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4dd81c6b-bb0b-4579-93e7-db116474438d does not exist
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:09:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:09:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:41.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:42 np0005592157 podman[263539]: 2026-01-22 14:09:42.063722301 +0000 UTC m=+0.068854019 container create 40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:09:42 np0005592157 systemd[1]: Started libpod-conmon-40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef.scope.
Jan 22 09:09:42 np0005592157 podman[263539]: 2026-01-22 14:09:42.038312797 +0000 UTC m=+0.043444525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:09:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:09:42 np0005592157 podman[263539]: 2026-01-22 14:09:42.176651818 +0000 UTC m=+0.181783556 container init 40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:09:42 np0005592157 podman[263539]: 2026-01-22 14:09:42.189538909 +0000 UTC m=+0.194670597 container start 40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:09:42 np0005592157 podman[263539]: 2026-01-22 14:09:42.194343399 +0000 UTC m=+0.199475177 container attach 40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:09:42 np0005592157 dazzling_sutherland[263555]: 167 167
Jan 22 09:09:42 np0005592157 systemd[1]: libpod-40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef.scope: Deactivated successfully.
Jan 22 09:09:42 np0005592157 podman[263539]: 2026-01-22 14:09:42.200324208 +0000 UTC m=+0.205455896 container died 40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 22 09:09:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4646db1652f5f75d344861ebf5444162cf455feb379965fc13d489f5ef2f64fb-merged.mount: Deactivated successfully.
Jan 22 09:09:42 np0005592157 podman[263539]: 2026-01-22 14:09:42.246525971 +0000 UTC m=+0.251657659 container remove 40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:09:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:42 np0005592157 systemd[1]: libpod-conmon-40d11f9b6ada130ad39deb8b202e9120bfc70037b58765f8f0f5253712ddcfef.scope: Deactivated successfully.
Jan 22 09:09:42 np0005592157 podman[263579]: 2026-01-22 14:09:42.502848435 +0000 UTC m=+0.072317665 container create 3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:09:42 np0005592157 systemd[1]: Started libpod-conmon-3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad.scope.
Jan 22 09:09:42 np0005592157 podman[263579]: 2026-01-22 14:09:42.471988545 +0000 UTC m=+0.041457835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:09:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:09:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8702adc950af6c9ad705988629f0886c8dcf620b6d76489bdd2c3f787048c850/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8702adc950af6c9ad705988629f0886c8dcf620b6d76489bdd2c3f787048c850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8702adc950af6c9ad705988629f0886c8dcf620b6d76489bdd2c3f787048c850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8702adc950af6c9ad705988629f0886c8dcf620b6d76489bdd2c3f787048c850/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8702adc950af6c9ad705988629f0886c8dcf620b6d76489bdd2c3f787048c850/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:42 np0005592157 podman[263579]: 2026-01-22 14:09:42.584754408 +0000 UTC m=+0.154223618 container init 3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:09:42 np0005592157 podman[263579]: 2026-01-22 14:09:42.598801278 +0000 UTC m=+0.168270508 container start 3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:09:42 np0005592157 podman[263579]: 2026-01-22 14:09:42.603396423 +0000 UTC m=+0.172865643 container attach 3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:09:42 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:43 np0005592157 gifted_bhabha[263596]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:09:43 np0005592157 gifted_bhabha[263596]: --> relative data size: 1.0
Jan 22 09:09:43 np0005592157 gifted_bhabha[263596]: --> All data devices are unavailable
Jan 22 09:09:43 np0005592157 systemd[1]: libpod-3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad.scope: Deactivated successfully.
Jan 22 09:09:43 np0005592157 podman[263579]: 2026-01-22 14:09:43.499092545 +0000 UTC m=+1.068561785 container died 3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:09:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8702adc950af6c9ad705988629f0886c8dcf620b6d76489bdd2c3f787048c850-merged.mount: Deactivated successfully.
Jan 22 09:09:43 np0005592157 podman[263579]: 2026-01-22 14:09:43.594658419 +0000 UTC m=+1.164127649 container remove 3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:09:43 np0005592157 systemd[1]: libpod-conmon-3b31bdafa486556938e342f416971c9c1a676c1fd3d9ce60ed41f6744c7623ad.scope: Deactivated successfully.
Jan 22 09:09:43 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:43.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:44 np0005592157 podman[263765]: 2026-01-22 14:09:44.43416544 +0000 UTC m=+0.044456480 container create fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:09:44 np0005592157 systemd[1]: Started libpod-conmon-fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672.scope.
Jan 22 09:09:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:09:44 np0005592157 podman[263765]: 2026-01-22 14:09:44.415507115 +0000 UTC m=+0.025798175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:09:44 np0005592157 podman[263765]: 2026-01-22 14:09:44.527696503 +0000 UTC m=+0.137987553 container init fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_davinci, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:09:44 np0005592157 podman[263765]: 2026-01-22 14:09:44.538771839 +0000 UTC m=+0.149062889 container start fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_davinci, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:09:44 np0005592157 podman[263765]: 2026-01-22 14:09:44.542547004 +0000 UTC m=+0.152838034 container attach fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:09:44 np0005592157 determined_davinci[263781]: 167 167
Jan 22 09:09:44 np0005592157 systemd[1]: libpod-fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672.scope: Deactivated successfully.
Jan 22 09:09:44 np0005592157 podman[263786]: 2026-01-22 14:09:44.612892798 +0000 UTC m=+0.042664875 container died fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_davinci, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:09:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e433c44a4abc798044e7f7b3fadac241c24b2ba7ef77e4917ce09801fb444ccf-merged.mount: Deactivated successfully.
Jan 22 09:09:44 np0005592157 podman[263786]: 2026-01-22 14:09:44.662913836 +0000 UTC m=+0.092685893 container remove fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:09:44 np0005592157 systemd[1]: libpod-conmon-fdaeb35772e119c6e49b4edc1b28e4f356bde14b5e727f304502a387e1e98672.scope: Deactivated successfully.
Jan 22 09:09:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1972 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:44 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:44 np0005592157 podman[263809]: 2026-01-22 14:09:44.963067494 +0000 UTC m=+0.079368541 container create 396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:09:45 np0005592157 systemd[1]: Started libpod-conmon-396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72.scope.
Jan 22 09:09:45 np0005592157 podman[263809]: 2026-01-22 14:09:44.932980163 +0000 UTC m=+0.049281270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:09:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:09:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5842eb9fb97dfc2b02d433658c1c51c2002c713e776fa83f33b22f05d7c9e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5842eb9fb97dfc2b02d433658c1c51c2002c713e776fa83f33b22f05d7c9e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5842eb9fb97dfc2b02d433658c1c51c2002c713e776fa83f33b22f05d7c9e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5842eb9fb97dfc2b02d433658c1c51c2002c713e776fa83f33b22f05d7c9e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:45 np0005592157 podman[263809]: 2026-01-22 14:09:45.071617441 +0000 UTC m=+0.187918498 container init 396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 09:09:45 np0005592157 podman[263809]: 2026-01-22 14:09:45.086426741 +0000 UTC m=+0.202727798 container start 396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 22 09:09:45 np0005592157 podman[263809]: 2026-01-22 14:09:45.091874507 +0000 UTC m=+0.208175614 container attach 396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:09:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:45 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1972 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:45 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]: {
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:    "0": [
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:        {
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "devices": [
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "/dev/loop3"
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            ],
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "lv_name": "ceph_lv0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "lv_size": "7511998464",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "name": "ceph_lv0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "tags": {
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.cluster_name": "ceph",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.crush_device_class": "",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.encrypted": "0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.osd_id": "0",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.type": "block",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:                "ceph.vdo": "0"
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            },
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "type": "block",
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:            "vg_name": "ceph_vg0"
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:        }
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]:    ]
Jan 22 09:09:45 np0005592157 sweet_meitner[263826]: }
Jan 22 09:09:45 np0005592157 systemd[1]: libpod-396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72.scope: Deactivated successfully.
Jan 22 09:09:45 np0005592157 podman[263809]: 2026-01-22 14:09:45.89978954 +0000 UTC m=+1.016090607 container died 396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:09:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8a5842eb9fb97dfc2b02d433658c1c51c2002c713e776fa83f33b22f05d7c9e3-merged.mount: Deactivated successfully.
Jan 22 09:09:45 np0005592157 podman[263809]: 2026-01-22 14:09:45.96272015 +0000 UTC m=+1.079021177 container remove 396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:09:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:45.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:45 np0005592157 systemd[1]: libpod-conmon-396500485de5286ec8afcab68cdd56a7238e47253e1126d626be576797cf1b72.scope: Deactivated successfully.
Jan 22 09:09:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:09:46 np0005592157 podman[263991]: 2026-01-22 14:09:46.733571858 +0000 UTC m=+0.060091890 container create 97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:09:46 np0005592157 systemd[1]: Started libpod-conmon-97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949.scope.
Jan 22 09:09:46 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:09:46 np0005592157 podman[263991]: 2026-01-22 14:09:46.709007126 +0000 UTC m=+0.035527218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:09:46 np0005592157 podman[263991]: 2026-01-22 14:09:46.819731958 +0000 UTC m=+0.146251980 container init 97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 09:09:46 np0005592157 podman[263991]: 2026-01-22 14:09:46.826301912 +0000 UTC m=+0.152821934 container start 97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:09:46 np0005592157 podman[263991]: 2026-01-22 14:09:46.830905466 +0000 UTC m=+0.157425558 container attach 97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:09:46 np0005592157 optimistic_agnesi[264008]: 167 167
Jan 22 09:09:46 np0005592157 systemd[1]: libpod-97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949.scope: Deactivated successfully.
Jan 22 09:09:46 np0005592157 podman[263991]: 2026-01-22 14:09:46.832620769 +0000 UTC m=+0.159140761 container died 97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:09:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3ebef7e11fe77a150c188266b425c94eb8f3b4c6e79b1346bb0c131611fcb40c-merged.mount: Deactivated successfully.
Jan 22 09:09:46 np0005592157 podman[263991]: 2026-01-22 14:09:46.880696368 +0000 UTC m=+0.207216370 container remove 97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:09:46 np0005592157 systemd[1]: libpod-conmon-97c1e1d47a077877c0ef6aa4240af0c308f6e747d0cf8c020d2c52de1f947949.scope: Deactivated successfully.
Jan 22 09:09:47 np0005592157 podman[264033]: 2026-01-22 14:09:47.059047506 +0000 UTC m=+0.042218344 container create cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:09:47 np0005592157 systemd[1]: Started libpod-conmon-cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b.scope.
Jan 22 09:09:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:09:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6a6dd9c304c1a89dabba6af0badfa00c8bcc94829abaea923bc94e4c18a9e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:47 np0005592157 podman[264033]: 2026-01-22 14:09:47.042488763 +0000 UTC m=+0.025659631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:09:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6a6dd9c304c1a89dabba6af0badfa00c8bcc94829abaea923bc94e4c18a9e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6a6dd9c304c1a89dabba6af0badfa00c8bcc94829abaea923bc94e4c18a9e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6a6dd9c304c1a89dabba6af0badfa00c8bcc94829abaea923bc94e4c18a9e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:09:47 np0005592157 podman[264033]: 2026-01-22 14:09:47.150957239 +0000 UTC m=+0.134128147 container init cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heyrovsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:09:47 np0005592157 podman[264033]: 2026-01-22 14:09:47.163052611 +0000 UTC m=+0.146223469 container start cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heyrovsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:09:47 np0005592157 podman[264033]: 2026-01-22 14:09:47.295213798 +0000 UTC m=+0.278384656 container attach cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 09:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:09:47
Jan 22 09:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root']
Jan 22 09:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:09:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:09:47.578 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:09:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:09:47.579 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:09:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:09:47.579 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:09:47 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:47.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]: {
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]:        "osd_id": 0,
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]:        "type": "bluestore"
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]:    }
Jan 22 09:09:48 np0005592157 vibrant_heyrovsky[264050]: }
Jan 22 09:09:48 np0005592157 systemd[1]: libpod-cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b.scope: Deactivated successfully.
Jan 22 09:09:48 np0005592157 podman[264033]: 2026-01-22 14:09:48.125469488 +0000 UTC m=+1.108640426 container died cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heyrovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:09:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ff6a6dd9c304c1a89dabba6af0badfa00c8bcc94829abaea923bc94e4c18a9e7-merged.mount: Deactivated successfully.
Jan 22 09:09:48 np0005592157 podman[264033]: 2026-01-22 14:09:48.193392952 +0000 UTC m=+1.176563780 container remove cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:09:48 np0005592157 systemd[1]: libpod-conmon-cc08087815a2b07efd01ff6a113002ca28072bac9ea16e73fab828ae93c20e2b.scope: Deactivated successfully.
Jan 22 09:09:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:09:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:09:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:48.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2b235d14-af4f-4fe0-971e-4ca0382755c3 does not exist
Jan 22 09:09:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 87ef03a4-0275-442d-bad9-8ce8ed45258c does not exist
Jan 22 09:09:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 75e6bcfd-39b8-4b81-8ff0-29e9a9d5c501 does not exist
Jan 22 09:09:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:48 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:49 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:49 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:49.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:50.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:50 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:09:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:51.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:09:52 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:53 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:53.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:54 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:55 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:55 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:55.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:56.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:56 np0005592157 podman[264190]: 2026-01-22 14:09:56.344299562 +0000 UTC m=+0.076381476 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:09:56 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:57 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:09:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:57.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:09:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:58.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:09:59 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:09:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:59.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:01 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:01 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:01 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:01.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:02.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:02 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:02 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:02 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:03 np0005592157 podman[264212]: 2026-01-22 14:10:03.422719982 +0000 UTC m=+0.122486186 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 09:10:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:03.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:03 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:03 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017680997580495068 of space, bias 1.0, pg target 0.5304299274148521 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:10:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:04.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:05 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:05 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:05.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:06.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:06 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:07 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:07.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:08.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:09 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:09.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.048 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.049 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.103 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.209 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.210 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.223 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.223 245711 INFO nova.compute.claims [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 09:10:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.429 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:10 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:10 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 1997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:10:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089952682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.870 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.879 245711 DEBUG nova.compute.provider_tree [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.913 245711 DEBUG nova.scheduler.client.report [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.944 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:10 np0005592157 nova_compute[245707]: 2026-01-22 14:10:10.945 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.037 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.038 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.081 245711 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.106 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.212 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.214 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.215 245711 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Creating image(s)#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.262 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.311 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.347 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.352 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.427 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.429 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.430 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.430 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.471 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.477 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:11 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.815 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:11 np0005592157 nova_compute[245707]: 2026-01-22 14:10:11.946 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] resizing rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:10:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:11.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:12 np0005592157 nova_compute[245707]: 2026-01-22 14:10:12.065 245711 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'migration_context' on Instance uuid 37c19c36-0359-4d64-a1c8-2ed3def24e7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:10:12 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 09:10:12 np0005592157 nova_compute[245707]: 2026-01-22 14:10:12.237 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:10:12 np0005592157 nova_compute[245707]: 2026-01-22 14:10:12.238 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Ensure instance console log exists: /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:10:12 np0005592157 nova_compute[245707]: 2026-01-22 14:10:12.238 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:12 np0005592157 nova_compute[245707]: 2026-01-22 14:10:12.238 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:12 np0005592157 nova_compute[245707]: 2026-01-22 14:10:12.239 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:12 np0005592157 nova_compute[245707]: 2026-01-22 14:10:12.242 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Automatically allocating a network for project e6c399bf43074b81b45ca1d976cb2b18. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Jan 22 09:10:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:12.851 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:10:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:12.852 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:10:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:13 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:14.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:14.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2002 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 09:10:15 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2002 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:15 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:15 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:15.854 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:16.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:16.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:10:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 09:10:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:18.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:10:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2933046963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:10:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:10:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2933046963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:10:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:18.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:18 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 09:10:19 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:20.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:20.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:20 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:20 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 09:10:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:22.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:22.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 09:10:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:24.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:25 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:25 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 09:10:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:26.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:26.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:26 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592157 podman[264492]: 2026-01-22 14:10:27.350102203 +0000 UTC m=+0.075894624 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 09:10:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:27 np0005592157 nova_compute[245707]: 2026-01-22 14:10:27.544 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Automatically allocated network: {'id': '18c81f01-33be-49a1-a179-aecc87794f99', 'name': 'auto_allocated_network', 'tenant_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['41485253-d693-4726-824d-ace746b534e1', '9c3d77fd-5c90-4745-9c8a-c335ad8bf441'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-22T14:10:12Z', 'updated_at': '2026-01-22T14:10:26Z', 'revision_number': 4, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Jan 22 09:10:27 np0005592157 nova_compute[245707]: 2026-01-22 14:10:27.554 245711 WARNING oslo_policy.policy [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 22 09:10:27 np0005592157 nova_compute[245707]: 2026-01-22 14:10:27.554 245711 WARNING oslo_policy.policy [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 22 09:10:27 np0005592157 nova_compute[245707]: 2026-01-22 14:10:27.557 245711 DEBUG nova.policy [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fd58a5335a8745f1b3ce1bd9a0439003', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:10:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:28.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:28.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:29 np0005592157 nova_compute[245707]: 2026-01-22 14:10:29.171 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Successfully created port: 90d96c34-0f6a-46af-8bb7-b253ca521620 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 09:10:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:30.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:30 np0005592157 nova_compute[245707]: 2026-01-22 14:10:30.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:30 np0005592157 nova_compute[245707]: 2026-01-22 14:10:30.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:10:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:30.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:30 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:31 np0005592157 nova_compute[245707]: 2026-01-22 14:10:31.329 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:31 np0005592157 nova_compute[245707]: 2026-01-22 14:10:31.330 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:10:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:31 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:32.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.296 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Successfully updated port: 90d96c34-0f6a-46af-8bb7-b253ca521620 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.321 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "refresh_cache-37c19c36-0359-4d64-a1c8-2ed3def24e7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.321 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquired lock "refresh_cache-37c19c36-0359-4d64-a1c8-2ed3def24e7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.321 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:10:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:32.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:32 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.694 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.903 245711 DEBUG nova.compute.manager [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received event network-changed-90d96c34-0f6a-46af-8bb7-b253ca521620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.903 245711 DEBUG nova.compute.manager [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Refreshing instance network info cache due to event network-changed-90d96c34-0f6a-46af-8bb7-b253ca521620. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:10:32 np0005592157 nova_compute[245707]: 2026-01-22 14:10:32.904 245711 DEBUG oslo_concurrency.lockutils [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-37c19c36-0359-4d64-a1c8-2ed3def24e7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:10:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:34.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:34.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:34 np0005592157 podman[264567]: 2026-01-22 14:10:34.374528834 +0000 UTC m=+0.098013286 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:10:34 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.682 245711 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Updating instance_info_cache with network_info: [{"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.730 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Releasing lock "refresh_cache-37c19c36-0359-4d64-a1c8-2ed3def24e7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.730 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Instance network_info: |[{"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.731 245711 DEBUG oslo_concurrency.lockutils [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-37c19c36-0359-4d64-a1c8-2ed3def24e7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.731 245711 DEBUG nova.network.neutron [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Refreshing network info cache for port 90d96c34-0f6a-46af-8bb7-b253ca521620 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.736 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Start _get_guest_xml network_info=[{"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_options': None, 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.742 245711 WARNING nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.750 245711 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.751 245711 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.755 245711 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.756 245711 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.759 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.759 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.760 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.760 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.760 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.761 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.761 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.761 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.762 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.762 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.762 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.763 245711 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.767 245711 DEBUG nova.privsep.utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 22 09:10:34 np0005592157 nova_compute[245707]: 2026-01-22 14:10:34.768 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327466574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.216 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.247 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.253 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.272 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.273 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.469 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.470 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.471 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.471 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.471 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970741415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.710 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.712 245711 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-3',id=7,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=
TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:11Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=37c19c36-0359-4d64-a1c8-2ed3def24e7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.713 245711 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.714 245711 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:1b:de,bridge_name='br-int',has_traffic_filtering=True,id=90d96c34-0f6a-46af-8bb7-b253ca521620,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d96c34-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.716 245711 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37c19c36-0359-4d64-a1c8-2ed3def24e7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.758 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <uuid>37c19c36-0359-4d64-a1c8-2ed3def24e7e</uuid>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <name>instance-00000007</name>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <memory>131072</memory>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <vcpu>1</vcpu>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <metadata>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <nova:name>tempest-tempest.common.compute-instance-811251323-3</nova:name>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <nova:creationTime>2026-01-22 14:10:34</nova:creationTime>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <nova:flavor name="m1.nano">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:memory>128</nova:memory>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:disk>1</nova:disk>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:swap>0</nova:swap>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      </nova:flavor>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <nova:owner>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:user uuid="fd58a5335a8745f1b3ce1bd9a0439003">tempest-AutoAllocateNetworkTest-687426125-project-member</nova:user>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:project uuid="e6c399bf43074b81b45ca1d976cb2b18">tempest-AutoAllocateNetworkTest-687426125</nova:project>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      </nova:owner>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <nova:ports>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <nova:port uuid="90d96c34-0f6a-46af-8bb7-b253ca521620">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:          <nova:ip type="fixed" address="10.1.0.31" ipVersion="4"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:          <nova:ip type="fixed" address="fdfe:381f:8400::304" ipVersion="6"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        </nova:port>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      </nova:ports>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </nova:instance>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  </metadata>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <sysinfo type="smbios">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <system>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <entry name="serial">37c19c36-0359-4d64-a1c8-2ed3def24e7e</entry>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <entry name="uuid">37c19c36-0359-4d64-a1c8-2ed3def24e7e</entry>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </system>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  </sysinfo>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <os>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <boot dev="hd"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <smbios mode="sysinfo"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  </os>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <features>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <acpi/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <apic/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <vmcoreinfo/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  </features>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <clock offset="utc">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <timer name="hpet" present="no"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  </clock>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <cpu mode="custom" match="exact">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <model>Nehalem</model>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  <devices>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <disk type="network" device="disk">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <target dev="vda" bus="virtio"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <disk type="network" device="cdrom">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk.config">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <target dev="sda" bus="sata"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <interface type="ethernet">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <mac address="fa:16:3e:a5:1b:de"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <model type="virtio"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <mtu size="1442"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <target dev="tap90d96c34-0f"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </interface>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <serial type="pty">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <log file="/var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/console.log" append="off"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </serial>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <video>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <model type="virtio"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </video>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <input type="tablet" bus="usb"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <rng model="virtio">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </rng>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <controller type="usb" index="0"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    <memballoon model="virtio">
Jan 22 09:10:35 np0005592157 nova_compute[245707]:      <stats period="10"/>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:    </memballoon>
Jan 22 09:10:35 np0005592157 nova_compute[245707]:  </devices>
Jan 22 09:10:35 np0005592157 nova_compute[245707]: </domain>
Jan 22 09:10:35 np0005592157 nova_compute[245707]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.759 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Preparing to wait for external event network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.760 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.760 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.760 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.761 245711 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-3',id=7,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:11Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=37c19c36-0359-4d64-a1c8-2ed3def24e7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.761 245711 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.762 245711 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:1b:de,bridge_name='br-int',has_traffic_filtering=True,id=90d96c34-0f6a-46af-8bb7-b253ca521620,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d96c34-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.763 245711 DEBUG os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:1b:de,bridge_name='br-int',has_traffic_filtering=True,id=90d96c34-0f6a-46af-8bb7-b253ca521620,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d96c34-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.854 245711 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.854 245711 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.854 245711 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.855 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.855 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLOUT] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.855 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.856 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.857 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.860 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.869 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.870 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.870 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.872 245711 INFO oslo.privsep.daemon [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp_4x_nmeq/privsep.sock']#033[00m
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:10:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036072568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:10:35 np0005592157 nova_compute[245707]: 2026-01-22 14:10:35.905 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:36.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.101 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.102 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=20.888916015625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.102 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.102 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:36.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.354 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.354 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.355 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 37c19c36-0359-4d64-a1c8-2ed3def24e7e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.355 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.355 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.471 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing inventories for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:10:36 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.562 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating ProviderTree inventory for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.562 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating inventory in ProviderTree for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.590 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing aggregate associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.622 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing trait associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.693 245711 INFO oslo.privsep.daemon [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.476 264683 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.481 264683 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.483 264683 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.483 264683 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264683#033[00m
Jan 22 09:10:36 np0005592157 nova_compute[245707]: 2026-01-22 14:10:36.726 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.048 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.049 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90d96c34-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.050 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90d96c34-0f, col_values=(('external_ids', {'iface-id': '90d96c34-0f6a-46af-8bb7-b253ca521620', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:1b:de', 'vm-uuid': '37c19c36-0359-4d64-a1c8-2ed3def24e7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:37 np0005592157 NetworkManager[48997]: <info>  [1769091037.1183] manager: (tap90d96c34-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.119 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.122 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.126 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.127 245711 INFO os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:1b:de,bridge_name='br-int',has_traffic_filtering=True,id=90d96c34-0f6a-46af-8bb7-b253ca521620,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d96c34-0f')#033[00m
Jan 22 09:10:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:10:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187506382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.204 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.209 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.235 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.250 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.251 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.251 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No VIF found with MAC fa:16:3e:a5:1b:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.252 245711 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Using config drive#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.281 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.290 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.290 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.981 245711 DEBUG nova.network.neutron [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Updated VIF entry in instance network info cache for port 90d96c34-0f6a-46af-8bb7-b253ca521620. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:10:37 np0005592157 nova_compute[245707]: 2026-01-22 14:10:37.982 245711 DEBUG nova.network.neutron [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Updating instance_info_cache with network_info: [{"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:10:38 np0005592157 nova_compute[245707]: 2026-01-22 14:10:38.029 245711 DEBUG oslo_concurrency.lockutils [req-0b39fdb9-9ad5-44bb-9e52-64d7ffc7b3d8 req-5eb334c9-1543-4f1e-9f39-5dd93f47d87d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-37c19c36-0359-4d64-a1c8-2ed3def24e7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:10:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:38.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:38 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:38.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:38 np0005592157 nova_compute[245707]: 2026-01-22 14:10:38.385 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:38 np0005592157 nova_compute[245707]: 2026-01-22 14:10:38.618 245711 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Creating config drive at /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/disk.config#033[00m
Jan 22 09:10:38 np0005592157 nova_compute[245707]: 2026-01-22 14:10:38.624 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofosyk38 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:38 np0005592157 nova_compute[245707]: 2026-01-22 14:10:38.756 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofosyk38" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:38 np0005592157 nova_compute[245707]: 2026-01-22 14:10:38.798 245711 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:38 np0005592157 nova_compute[245707]: 2026-01-22 14:10:38.802 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/disk.config 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.262 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.264 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.264 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.264 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:10:39 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:39 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.313 245711 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/disk.config 37c19c36-0359-4d64-a1c8-2ed3def24e7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.314 245711 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Deleting local config drive /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e/disk.config because it was imported into RBD.#033[00m
Jan 22 09:10:39 np0005592157 systemd[1]: Starting libvirt secret daemon...
Jan 22 09:10:39 np0005592157 systemd[1]: Started libvirt secret daemon.
Jan 22 09:10:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:10:39 np0005592157 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 22 09:10:39 np0005592157 kernel: tap90d96c34-0f: entered promiscuous mode
Jan 22 09:10:39 np0005592157 NetworkManager[48997]: <info>  [1769091039.4520] manager: (tap90d96c34-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 22 09:10:39 np0005592157 ovn_controller[146940]: 2026-01-22T14:10:39Z|00027|binding|INFO|Claiming lport 90d96c34-0f6a-46af-8bb7-b253ca521620 for this chassis.
Jan 22 09:10:39 np0005592157 ovn_controller[146940]: 2026-01-22T14:10:39Z|00028|binding|INFO|90d96c34-0f6a-46af-8bb7-b253ca521620: Claiming fa:16:3e:a5:1b:de 10.1.0.31 fdfe:381f:8400::304
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.458 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.463 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:39 np0005592157 systemd-udevd[264805]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:10:39 np0005592157 NetworkManager[48997]: <info>  [1769091039.5063] device (tap90d96c34-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 09:10:39 np0005592157 NetworkManager[48997]: <info>  [1769091039.5073] device (tap90d96c34-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 09:10:39 np0005592157 systemd-machined[211644]: New machine qemu-1-instance-00000007.
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.534 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:39 np0005592157 systemd[1]: Started Virtual Machine qemu-1-instance-00000007.
Jan 22 09:10:39 np0005592157 ovn_controller[146940]: 2026-01-22T14:10:39Z|00029|binding|INFO|Setting lport 90d96c34-0f6a-46af-8bb7-b253ca521620 ovn-installed in OVS
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.543 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:39 np0005592157 ovn_controller[146940]: 2026-01-22T14:10:39Z|00030|binding|INFO|Setting lport 90d96c34-0f6a-46af-8bb7-b253ca521620 up in Southbound
Jan 22 09:10:39 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:39.687 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:1b:de 10.1.0.31 fdfe:381f:8400::304'], port_security=['fa:16:3e:a5:1b:de 10.1.0.31 fdfe:381f:8400::304'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.31/26 fdfe:381f:8400::304/64', 'neutron:device_id': '37c19c36-0359-4d64-a1c8-2ed3def24e7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=90d96c34-0f6a-46af-8bb7-b253ca521620) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:10:39 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:39.689 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 90d96c34-0f6a-46af-8bb7-b253ca521620 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 bound to our chassis#033[00m
Jan 22 09:10:39 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:39.693 157426 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c81f01-33be-49a1-a179-aecc87794f99#033[00m
Jan 22 09:10:39 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:39.696 157426 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpe6eymxmj/privsep.sock']#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.700 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.701 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.701 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.701 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.701 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:39 np0005592157 nova_compute[245707]: 2026-01-22 14:10:39.702 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:40.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.244 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769091040.243189, 37c19c36-0359-4d64-a1c8-2ed3def24e7e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.245 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] VM Started (Lifecycle Event)#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.248 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.308 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.313 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769091040.243385, 37c19c36-0359-4d64-a1c8-2ed3def24e7e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.313 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] VM Paused (Lifecycle Event)#033[00m
Jan 22 09:10:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:40 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.345 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:40.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.352 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.426 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:10:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:40.549 157426 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:10:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:40.550 157426 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpe6eymxmj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 09:10:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:40.327 264865 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:10:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:40.335 264865 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:10:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:40.340 264865 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 22 09:10:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:40.340 264865 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264865#033[00m
Jan 22 09:10:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:40.553 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[b505f3e7-017e-48d8-92dc-dca40f81d4aa]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.588 245711 DEBUG nova.compute.manager [req-79f21592-2a3f-4071-b374-ff449b87c27f req-ac80eb58-4e97-44c2-a9c4-c86fff42e5ca 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received event network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.588 245711 DEBUG oslo_concurrency.lockutils [req-79f21592-2a3f-4071-b374-ff449b87c27f req-ac80eb58-4e97-44c2-a9c4-c86fff42e5ca 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.589 245711 DEBUG oslo_concurrency.lockutils [req-79f21592-2a3f-4071-b374-ff449b87c27f req-ac80eb58-4e97-44c2-a9c4-c86fff42e5ca 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.589 245711 DEBUG oslo_concurrency.lockutils [req-79f21592-2a3f-4071-b374-ff449b87c27f req-ac80eb58-4e97-44c2-a9c4-c86fff42e5ca 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.589 245711 DEBUG nova.compute.manager [req-79f21592-2a3f-4071-b374-ff449b87c27f req-ac80eb58-4e97-44c2-a9c4-c86fff42e5ca 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Processing event network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.590 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.594 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.598 245711 INFO nova.virt.libvirt.driver [-] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Instance spawned successfully.#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.599 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.604 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769091040.6037028, 37c19c36-0359-4d64-a1c8-2ed3def24e7e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.604 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.640 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.643 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.644 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.644 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.644 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.645 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.645 245711 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.649 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.708 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.723 245711 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Took 29.51 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.724 245711 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.842 245711 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Took 30.68 seconds to build instance.#033[00m
Jan 22 09:10:40 np0005592157 nova_compute[245707]: 2026-01-22 14:10:40.868 245711 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.160 264865 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.160 264865 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.160 264865 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:41 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:41 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.828 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[815ad922-0e2c-4c80-8ddc-35c99546b42d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.829 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18c81f01-31 in ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.832 264865 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18c81f01-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.832 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[0f93b3ee-4e33-47fc-b8a1-1f9382a8e339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.835 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[425f6a01-92ef-4e09-924c-ba604ad59588]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.862 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[5705dd14-a763-4f24-9ade-cd9a7ad1d1b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.883 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bba939-c129-4f92-ae7f-1b79c95061a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:41.886 157426 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpkc7l3jau/privsep.sock']#033[00m
Jan 22 09:10:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.162 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:10:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:42.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:42 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.421 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:10:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:42.731 157426 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:10:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:42.734 157426 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkc7l3jau/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 09:10:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:42.512 264880 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:10:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:42.517 264880 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:10:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:42.520 264880 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 22 09:10:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:42.520 264880 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264880#033[00m
Jan 22 09:10:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:42.739 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2ee859-cc13-4716-913f-4518d54423e1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.772 245711 DEBUG nova.compute.manager [req-be41c45e-91b1-495b-924d-1b6da9df494f req-c117686b-5507-4f99-aab2-bf5622c1443d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received event network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.772 245711 DEBUG oslo_concurrency.lockutils [req-be41c45e-91b1-495b-924d-1b6da9df494f req-c117686b-5507-4f99-aab2-bf5622c1443d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.773 245711 DEBUG oslo_concurrency.lockutils [req-be41c45e-91b1-495b-924d-1b6da9df494f req-c117686b-5507-4f99-aab2-bf5622c1443d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.773 245711 DEBUG oslo_concurrency.lockutils [req-be41c45e-91b1-495b-924d-1b6da9df494f req-c117686b-5507-4f99-aab2-bf5622c1443d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.773 245711 DEBUG nova.compute.manager [req-be41c45e-91b1-495b-924d-1b6da9df494f req-c117686b-5507-4f99-aab2-bf5622c1443d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] No waiting events found dispatching network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:10:42 np0005592157 nova_compute[245707]: 2026-01-22 14:10:42.773 245711 WARNING nova.compute.manager [req-be41c45e-91b1-495b-924d-1b6da9df494f req-c117686b-5507-4f99-aab2-bf5622c1443d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received unexpected event network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 for instance with vm_state active and task_state None.#033[00m
Jan 22 09:10:43 np0005592157 nova_compute[245707]: 2026-01-22 14:10:43.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:43 np0005592157 nova_compute[245707]: 2026-01-22 14:10:43.387 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:43 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:43.437 264880 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:43.437 264880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:43.437 264880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 22 09:10:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:44.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.072 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5909c2-21f8-4c29-869e-40e3e8659b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.102 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[7807e36c-e7ed-4629-aefd-45fc1404ae83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 NetworkManager[48997]: <info>  [1769091044.1039] manager: (tap18c81f01-30): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 22 09:10:44 np0005592157 systemd-udevd[264892]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.147 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddc0a50-c1cf-40d0-96b8-1772ec644dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.150 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[0481d447-ff7b-4ad5-8925-2e2ce0cb3819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 NetworkManager[48997]: <info>  [1769091044.1825] device (tap18c81f01-30): carrier: link connected
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.188 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9c1365-a55f-4f1e-b5c4-44d9464dde38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.216 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[37e5a95f-bb7a-438a-9f08-42524a4c0e44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483912, 'reachable_time': 17156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264911, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.243 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[ec13280f-92e6-4c97-a0b8-c22adc6bcf1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:9efc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483912, 'tstamp': 483912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264912, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.269 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[b3901bb2-7595-40b5-baf6-93d2b7886072]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483912, 'reachable_time': 17156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264913, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.322 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[1292ce90-a3f3-443c-9375-8d21a9736e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:44.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.410 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[785a696a-5a66-4bab-bfce-1b7b2b59f27e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.412 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.413 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.413 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c81f01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:44 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:44 np0005592157 NetworkManager[48997]: <info>  [1769091044.4631] manager: (tap18c81f01-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 22 09:10:44 np0005592157 kernel: tap18c81f01-30: entered promiscuous mode
Jan 22 09:10:44 np0005592157 nova_compute[245707]: 2026-01-22 14:10:44.462 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:44 np0005592157 nova_compute[245707]: 2026-01-22 14:10:44.464 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.472 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c81f01-30, col_values=(('external_ids', {'iface-id': '27625ef7-8ad4-4498-ac70-a911e819f701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:44 np0005592157 ovn_controller[146940]: 2026-01-22T14:10:44Z|00031|binding|INFO|Releasing lport 27625ef7-8ad4-4498-ac70-a911e819f701 from this chassis (sb_readonly=0)
Jan 22 09:10:44 np0005592157 nova_compute[245707]: 2026-01-22 14:10:44.480 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.483 157426 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.485 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1ad9e9-1d3e-4186-890e-cdf7fa9a2c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.488 157426 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: global
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    log         /dev/log local0 debug
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    log-tag     haproxy-metadata-proxy-18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    user        root
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    group       root
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    maxconn     1024
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    pidfile     /var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    daemon
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: defaults
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    log global
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    mode http
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    option httplog
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    option dontlognull
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    option http-server-close
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    option forwardfor
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    retries                 3
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    timeout http-request    30s
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    timeout connect         30s
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    timeout client          32s
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    timeout server          32s
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    timeout http-keep-alive 30s
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: listen listener
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    bind 169.254.169.254:80
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]:    http-request add-header X-OVN-Network-ID 18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 09:10:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:44.491 157426 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'env', 'PROCESS_TAG=haproxy-18c81f01-33be-49a1-a179-aecc87794f99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18c81f01-33be-49a1-a179-aecc87794f99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 22 09:10:44 np0005592157 nova_compute[245707]: 2026-01-22 14:10:44.496 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:44 np0005592157 podman[264946]: 2026-01-22 14:10:44.94303414 +0000 UTC m=+0.075065004 container create 50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 09:10:44 np0005592157 podman[264946]: 2026-01-22 14:10:44.900579751 +0000 UTC m=+0.032610645 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 09:10:45 np0005592157 systemd[1]: Started libpod-conmon-50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227.scope.
Jan 22 09:10:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:10:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0936b0b02cf8c630e6fd0641d909ad8f8159fddd6ab3f07c983173b1dfdcce1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:45 np0005592157 podman[264946]: 2026-01-22 14:10:45.074434197 +0000 UTC m=+0.206465091 container init 50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:10:45 np0005592157 podman[264946]: 2026-01-22 14:10:45.08335368 +0000 UTC m=+0.215384544 container start 50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:10:45 np0005592157 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[264962]: [NOTICE]   (264966) : New worker (264968) forked
Jan 22 09:10:45 np0005592157 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[264962]: [NOTICE]   (264966) : Loading success.
Jan 22 09:10:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 09:10:45 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:45 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:46.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:10:46 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592157 nova_compute[245707]: 2026-01-22 14:10:47.213 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:10:47
Jan 22 09:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'vms']
Jan 22 09:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:10:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 09:10:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:47.579 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:47.580 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:10:47.581 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:47 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:48.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:48 np0005592157 nova_compute[245707]: 2026-01-22 14:10:48.389 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:48 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2c19c9f3-7810-4b1d-8ad0-232f53a76a28 does not exist
Jan 22 09:10:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e24dc98e-34f9-4e7f-a039-b7a79cab319b does not exist
Jan 22 09:10:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6ee70680-878b-4a21-8835-0f7b6f32e14c does not exist
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:10:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:10:50 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:10:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:10:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:10:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:10:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:50.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:10:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:50.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:50 np0005592157 podman[265304]: 2026-01-22 14:10:50.808358438 +0000 UTC m=+0.077024882 container create 79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:10:50 np0005592157 systemd[1]: Started libpod-conmon-79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192.scope.
Jan 22 09:10:50 np0005592157 podman[265304]: 2026-01-22 14:10:50.78037241 +0000 UTC m=+0.049038864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:10:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:10:50 np0005592157 podman[265304]: 2026-01-22 14:10:50.928280499 +0000 UTC m=+0.196946943 container init 79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:10:50 np0005592157 podman[265304]: 2026-01-22 14:10:50.944031022 +0000 UTC m=+0.212697436 container start 79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:10:50 np0005592157 podman[265304]: 2026-01-22 14:10:50.950206896 +0000 UTC m=+0.218873420 container attach 79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:10:50 np0005592157 funny_shirley[265320]: 167 167
Jan 22 09:10:50 np0005592157 systemd[1]: libpod-79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192.scope: Deactivated successfully.
Jan 22 09:10:50 np0005592157 podman[265304]: 2026-01-22 14:10:50.95877149 +0000 UTC m=+0.227437934 container died 79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:10:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-be7672b868590215c3f2a653c1ece8f1b252138d36c896b14cd27eb1a0765b85-merged.mount: Deactivated successfully.
Jan 22 09:10:51 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:51 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:51 np0005592157 podman[265304]: 2026-01-22 14:10:51.018415888 +0000 UTC m=+0.287082292 container remove 79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:10:51 np0005592157 systemd[1]: libpod-conmon-79e3bf3bc9d17276b79c9fcdb1c074015e007d80bb4371516c274d4ebf121192.scope: Deactivated successfully.
Jan 22 09:10:51 np0005592157 podman[265345]: 2026-01-22 14:10:51.261960083 +0000 UTC m=+0.091155115 container create 5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:10:51 np0005592157 podman[265345]: 2026-01-22 14:10:51.219623767 +0000 UTC m=+0.048818839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:10:51 np0005592157 systemd[1]: Started libpod-conmon-5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442.scope.
Jan 22 09:10:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:10:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4546bde82825a4879f2d74ca4b1dcfeab726631e76d86a45b78c172ff040b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4546bde82825a4879f2d74ca4b1dcfeab726631e76d86a45b78c172ff040b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4546bde82825a4879f2d74ca4b1dcfeab726631e76d86a45b78c172ff040b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4546bde82825a4879f2d74ca4b1dcfeab726631e76d86a45b78c172ff040b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4546bde82825a4879f2d74ca4b1dcfeab726631e76d86a45b78c172ff040b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:51 np0005592157 podman[265345]: 2026-01-22 14:10:51.386179442 +0000 UTC m=+0.215374484 container init 5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:10:51 np0005592157 podman[265345]: 2026-01-22 14:10:51.400226792 +0000 UTC m=+0.229421804 container start 5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:10:51 np0005592157 podman[265345]: 2026-01-22 14:10:51.404627971 +0000 UTC m=+0.233823013 container attach 5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 09:10:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 09:10:52 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:10:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:52.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:10:52 np0005592157 nova_compute[245707]: 2026-01-22 14:10:52.255 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:52 np0005592157 recursing_thompson[265361]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:10:52 np0005592157 recursing_thompson[265361]: --> relative data size: 1.0
Jan 22 09:10:52 np0005592157 recursing_thompson[265361]: --> All data devices are unavailable
Jan 22 09:10:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:52.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:52 np0005592157 systemd[1]: libpod-5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442.scope: Deactivated successfully.
Jan 22 09:10:52 np0005592157 podman[265345]: 2026-01-22 14:10:52.386334159 +0000 UTC m=+1.215529161 container died 5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:10:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7b4546bde82825a4879f2d74ca4b1dcfeab726631e76d86a45b78c172ff040b4-merged.mount: Deactivated successfully.
Jan 22 09:10:52 np0005592157 podman[265345]: 2026-01-22 14:10:52.479397571 +0000 UTC m=+1.308592573 container remove 5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:10:52 np0005592157 systemd[1]: libpod-conmon-5950707541f212c67924d1a4e9304cc7bae9739a43763c8e24ae01df95a07442.scope: Deactivated successfully.
Jan 22 09:10:53 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:53 np0005592157 podman[265529]: 2026-01-22 14:10:53.254908326 +0000 UTC m=+0.045600279 container create b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:10:53 np0005592157 systemd[1]: Started libpod-conmon-b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c.scope.
Jan 22 09:10:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:10:53 np0005592157 podman[265529]: 2026-01-22 14:10:53.235969753 +0000 UTC m=+0.026661726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:10:53 np0005592157 podman[265529]: 2026-01-22 14:10:53.348377267 +0000 UTC m=+0.139069240 container init b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_knuth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:10:53 np0005592157 podman[265529]: 2026-01-22 14:10:53.357104485 +0000 UTC m=+0.147796428 container start b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:10:53 np0005592157 podman[265529]: 2026-01-22 14:10:53.36132591 +0000 UTC m=+0.152017883 container attach b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:10:53 np0005592157 awesome_knuth[265545]: 167 167
Jan 22 09:10:53 np0005592157 systemd[1]: libpod-b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c.scope: Deactivated successfully.
Jan 22 09:10:53 np0005592157 conmon[265545]: conmon b1f791f0c6607b8d3d00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c.scope/container/memory.events
Jan 22 09:10:53 np0005592157 podman[265529]: 2026-01-22 14:10:53.365827013 +0000 UTC m=+0.156518966 container died b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:10:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-70d6238bbb518cd8b51076fd6e6392664c454bfed50efdb7ef57873d1a4b396a-merged.mount: Deactivated successfully.
Jan 22 09:10:53 np0005592157 podman[265529]: 2026-01-22 14:10:53.404804725 +0000 UTC m=+0.195496668 container remove b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:10:53 np0005592157 nova_compute[245707]: 2026-01-22 14:10:53.408 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:53 np0005592157 systemd[1]: libpod-conmon-b1f791f0c6607b8d3d00bcd4f4334aa28f1749b7a4a8ced02c8ebba3da7cab4c.scope: Deactivated successfully.
Jan 22 09:10:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 22 09:10:53 np0005592157 podman[265571]: 2026-01-22 14:10:53.679729893 +0000 UTC m=+0.092707974 container create 14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chebyshev, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:10:53 np0005592157 podman[265571]: 2026-01-22 14:10:53.635042188 +0000 UTC m=+0.048020319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:10:53 np0005592157 systemd[1]: Started libpod-conmon-14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9.scope.
Jan 22 09:10:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:10:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769728792d3c83fc2c0e94cf60e46562615ea518f907e64f5b25b7cc10ed0347/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769728792d3c83fc2c0e94cf60e46562615ea518f907e64f5b25b7cc10ed0347/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769728792d3c83fc2c0e94cf60e46562615ea518f907e64f5b25b7cc10ed0347/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769728792d3c83fc2c0e94cf60e46562615ea518f907e64f5b25b7cc10ed0347/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:53 np0005592157 podman[265571]: 2026-01-22 14:10:53.866331858 +0000 UTC m=+0.279309939 container init 14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:10:53 np0005592157 podman[265571]: 2026-01-22 14:10:53.874549523 +0000 UTC m=+0.287527574 container start 14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:10:53 np0005592157 podman[265571]: 2026-01-22 14:10:53.878807419 +0000 UTC m=+0.291785510 container attach 14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chebyshev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:10:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:54 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:54.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]: {
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:    "0": [
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:        {
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "devices": [
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "/dev/loop3"
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            ],
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "lv_name": "ceph_lv0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "lv_size": "7511998464",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "name": "ceph_lv0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "tags": {
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.cluster_name": "ceph",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.crush_device_class": "",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.encrypted": "0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.osd_id": "0",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.type": "block",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:                "ceph.vdo": "0"
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            },
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "type": "block",
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:            "vg_name": "ceph_vg0"
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:        }
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]:    ]
Jan 22 09:10:54 np0005592157 keen_chebyshev[265587]: }
Jan 22 09:10:54 np0005592157 systemd[1]: libpod-14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9.scope: Deactivated successfully.
Jan 22 09:10:54 np0005592157 podman[265571]: 2026-01-22 14:10:54.804160962 +0000 UTC m=+1.217139003 container died 14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 09:10:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-769728792d3c83fc2c0e94cf60e46562615ea518f907e64f5b25b7cc10ed0347-merged.mount: Deactivated successfully.
Jan 22 09:10:54 np0005592157 podman[265571]: 2026-01-22 14:10:54.870612459 +0000 UTC m=+1.283590510 container remove 14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:10:54 np0005592157 systemd[1]: libpod-conmon-14aa9d21d916fe582ac295969c9cbbb3ed40f255f9bde27b5017113ab82a5bb9.scope: Deactivated successfully.
Jan 22 09:10:55 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 276 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 109 op/s
Jan 22 09:10:55 np0005592157 podman[265755]: 2026-01-22 14:10:55.63596731 +0000 UTC m=+0.060334516 container create 11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:10:55 np0005592157 systemd[1]: Started libpod-conmon-11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e.scope.
Jan 22 09:10:55 np0005592157 podman[265755]: 2026-01-22 14:10:55.613600672 +0000 UTC m=+0.037967898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:10:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:10:55 np0005592157 podman[265755]: 2026-01-22 14:10:55.737312968 +0000 UTC m=+0.161680274 container init 11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:10:55 np0005592157 podman[265755]: 2026-01-22 14:10:55.744739863 +0000 UTC m=+0.169107109 container start 11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:10:55 np0005592157 podman[265755]: 2026-01-22 14:10:55.74941055 +0000 UTC m=+0.173777796 container attach 11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:10:55 np0005592157 eloquent_robinson[265772]: 167 167
Jan 22 09:10:55 np0005592157 systemd[1]: libpod-11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e.scope: Deactivated successfully.
Jan 22 09:10:55 np0005592157 conmon[265772]: conmon 11d2366fc5ceb7ec542e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e.scope/container/memory.events
Jan 22 09:10:55 np0005592157 podman[265755]: 2026-01-22 14:10:55.754016015 +0000 UTC m=+0.178383251 container died 11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:10:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4f139d45231428cec8193f353eb25e5a532b148bdd688574dfd832d5161dcb84-merged.mount: Deactivated successfully.
Jan 22 09:10:55 np0005592157 podman[265755]: 2026-01-22 14:10:55.812992046 +0000 UTC m=+0.237359252 container remove 11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:10:55 np0005592157 systemd[1]: libpod-conmon-11d2366fc5ceb7ec542e6b8f672d9c2e4310c77a7d48d4df6a283174351b1a7e.scope: Deactivated successfully.
Jan 22 09:10:56 np0005592157 podman[265798]: 2026-01-22 14:10:56.026669356 +0000 UTC m=+0.042035160 container create 1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:10:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:56.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:56 np0005592157 systemd[1]: Started libpod-conmon-1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c.scope.
Jan 22 09:10:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:10:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168b26d03f05576ee4c010121e915cdfd3dc7e09bc6a9472615a0d9d143e767c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168b26d03f05576ee4c010121e915cdfd3dc7e09bc6a9472615a0d9d143e767c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168b26d03f05576ee4c010121e915cdfd3dc7e09bc6a9472615a0d9d143e767c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168b26d03f05576ee4c010121e915cdfd3dc7e09bc6a9472615a0d9d143e767c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:56 np0005592157 podman[265798]: 2026-01-22 14:10:56.010503993 +0000 UTC m=+0.025869817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:10:56 np0005592157 podman[265798]: 2026-01-22 14:10:56.124180018 +0000 UTC m=+0.139545852 container init 1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 22 09:10:56 np0005592157 podman[265798]: 2026-01-22 14:10:56.133389348 +0000 UTC m=+0.148755182 container start 1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:10:56 np0005592157 podman[265798]: 2026-01-22 14:10:56.137679415 +0000 UTC m=+0.153045259 container attach 1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:10:56 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:10:56 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:56 np0005592157 ovn_controller[146940]: 2026-01-22T14:10:56Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:1b:de 10.1.0.31
Jan 22 09:10:56 np0005592157 ovn_controller[146940]: 2026-01-22T14:10:56Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:1b:de 10.1.0.31
Jan 22 09:10:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:56.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]: {
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]:        "osd_id": 0,
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]:        "type": "bluestore"
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]:    }
Jan 22 09:10:57 np0005592157 thirsty_euler[265815]: }
Jan 22 09:10:57 np0005592157 systemd[1]: libpod-1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c.scope: Deactivated successfully.
Jan 22 09:10:57 np0005592157 podman[265836]: 2026-01-22 14:10:57.17097471 +0000 UTC m=+0.038882890 container died 1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:10:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-168b26d03f05576ee4c010121e915cdfd3dc7e09bc6a9472615a0d9d143e767c-merged.mount: Deactivated successfully.
Jan 22 09:10:57 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:57 np0005592157 podman[265836]: 2026-01-22 14:10:57.234777342 +0000 UTC m=+0.102685472 container remove 1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:10:57 np0005592157 systemd[1]: libpod-conmon-1ceae8577e26a2c552018098faee4a6f05055ecf6c9bce4bd532d3d39846217c.scope: Deactivated successfully.
Jan 22 09:10:57 np0005592157 nova_compute[245707]: 2026-01-22 14:10:57.288 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:10:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:10:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9c92e64b-f561-4868-8429-4137673e9461 does not exist
Jan 22 09:10:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev aa26d03f-e73c-45d7-bfaf-788160480bf6 does not exist
Jan 22 09:10:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f4605ea4-87a8-496d-afac-5317964331ea does not exist
Jan 22 09:10:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 276 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Jan 22 09:10:57 np0005592157 podman[265874]: 2026-01-22 14:10:57.616272338 +0000 UTC m=+0.100814365 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 22 09:10:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:10:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:58.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:10:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:10:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:58.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:58 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:58 np0005592157 nova_compute[245707]: 2026-01-22 14:10:58.461 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 293 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Jan 22 09:10:59 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:00.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 2048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:00.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:00 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:00 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 2048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 298 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Jan 22 09:11:01 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:02.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:02 np0005592157 nova_compute[245707]: 2026-01-22 14:11:02.330 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:02.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:02 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 298 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Jan 22 09:11:03 np0005592157 nova_compute[245707]: 2026-01-22 14:11:03.502 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:03 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005919898974036851 of space, bias 1.0, pg target 1.7759696922110553 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:11:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:11:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:04.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:04.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:04 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:05 np0005592157 podman[265923]: 2026-01-22 14:11:05.423385565 +0000 UTC m=+0.141810029 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:11:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 235 op/s
Jan 22 09:11:05 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:06.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:06.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:06 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:07 np0005592157 nova_compute[245707]: 2026-01-22 14:11:07.334 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 727 KiB/s wr, 190 op/s
Jan 22 09:11:07 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:08.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:08.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:08 np0005592157 nova_compute[245707]: 2026-01-22 14:11:08.505 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 727 KiB/s wr, 190 op/s
Jan 22 09:11:09 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:10.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:11:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:10.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:11:10 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:10 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 247 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 145 op/s
Jan 22 09:11:11 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:12.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:12 np0005592157 nova_compute[245707]: 2026-01-22 14:11:12.337 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:12.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.335 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.336 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.337 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.337 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.337 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.341 245711 INFO nova.compute.manager [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Terminating instance#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.342 245711 DEBUG nova.compute.manager [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 09:11:13 np0005592157 kernel: tap90d96c34-0f (unregistering): left promiscuous mode
Jan 22 09:11:13 np0005592157 NetworkManager[48997]: <info>  [1769091073.4154] device (tap90d96c34-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.428 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 ovn_controller[146940]: 2026-01-22T14:11:13Z|00032|binding|INFO|Releasing lport 90d96c34-0f6a-46af-8bb7-b253ca521620 from this chassis (sb_readonly=0)
Jan 22 09:11:13 np0005592157 ovn_controller[146940]: 2026-01-22T14:11:13Z|00033|binding|INFO|Setting lport 90d96c34-0f6a-46af-8bb7-b253ca521620 down in Southbound
Jan 22 09:11:13 np0005592157 ovn_controller[146940]: 2026-01-22T14:11:13Z|00034|binding|INFO|Removing iface tap90d96c34-0f ovn-installed in OVS
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.430 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.450 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 247 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 144 KiB/s rd, 1.1 MiB/s wr, 59 op/s
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.466 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:1b:de 10.1.0.31 fdfe:381f:8400::304'], port_security=['fa:16:3e:a5:1b:de 10.1.0.31 fdfe:381f:8400::304'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.31/26 fdfe:381f:8400::304/64', 'neutron:device_id': '37c19c36-0359-4d64-a1c8-2ed3def24e7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=90d96c34-0f6a-46af-8bb7-b253ca521620) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.468 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 90d96c34-0f6a-46af-8bb7-b253ca521620 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 unbound from our chassis#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.470 157426 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18c81f01-33be-49a1-a179-aecc87794f99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.472 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[14549f0b-6834-49ea-bd03-6a451d0ff857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.473 157426 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 namespace which is not needed anymore#033[00m
Jan 22 09:11:13 np0005592157 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 22 09:11:13 np0005592157 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 16.747s CPU time.
Jan 22 09:11:13 np0005592157 systemd-machined[211644]: Machine qemu-1-instance-00000007 terminated.
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.506 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.591 245711 INFO nova.virt.libvirt.driver [-] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Instance destroyed successfully.#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.592 245711 DEBUG nova.objects.instance [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'resources' on Instance uuid 37c19c36-0359-4d64-a1c8-2ed3def24e7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.635 245711 DEBUG nova.virt.libvirt.vif [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-3',id=7,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-22T14:10:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:10:40Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=37c19c36-0359-4d64-a1c8-2ed3def24e7e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.637 245711 DEBUG nova.network.os_vif_util [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "90d96c34-0f6a-46af-8bb7-b253ca521620", "address": "fa:16:3e:a5:1b:de", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.31", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::304", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d96c34-0f", "ovs_interfaceid": "90d96c34-0f6a-46af-8bb7-b253ca521620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.638 245711 DEBUG nova.network.os_vif_util [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:1b:de,bridge_name='br-int',has_traffic_filtering=True,id=90d96c34-0f6a-46af-8bb7-b253ca521620,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d96c34-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.639 245711 DEBUG os_vif [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:1b:de,bridge_name='br-int',has_traffic_filtering=True,id=90d96c34-0f6a-46af-8bb7-b253ca521620,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d96c34-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.642 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.643 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90d96c34-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.645 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.647 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[264962]: [NOTICE]   (264966) : haproxy version is 2.8.14-c23fe91
Jan 22 09:11:13 np0005592157 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[264962]: [NOTICE]   (264966) : path to executable is /usr/sbin/haproxy
Jan 22 09:11:13 np0005592157 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[264962]: [WARNING]  (264966) : Exiting Master process...
Jan 22 09:11:13 np0005592157 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[264962]: [ALERT]    (264966) : Current worker (264968) exited with code 143 (Terminated)
Jan 22 09:11:13 np0005592157 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[264962]: [WARNING]  (264966) : All workers exited. Exiting... (0)
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.656 245711 INFO os_vif [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:1b:de,bridge_name='br-int',has_traffic_filtering=True,id=90d96c34-0f6a-46af-8bb7-b253ca521620,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d96c34-0f')#033[00m
Jan 22 09:11:13 np0005592157 systemd[1]: libpod-50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227.scope: Deactivated successfully.
Jan 22 09:11:13 np0005592157 podman[266039]: 2026-01-22 14:11:13.661236496 +0000 UTC m=+0.062868039 container died 50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:11:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227-userdata-shm.mount: Deactivated successfully.
Jan 22 09:11:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a0936b0b02cf8c630e6fd0641d909ad8f8159fddd6ab3f07c983173b1dfdcce1-merged.mount: Deactivated successfully.
Jan 22 09:11:13 np0005592157 podman[266039]: 2026-01-22 14:11:13.707073939 +0000 UTC m=+0.108705452 container cleanup 50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:11:13 np0005592157 systemd[1]: libpod-conmon-50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227.scope: Deactivated successfully.
Jan 22 09:11:13 np0005592157 podman[266084]: 2026-01-22 14:11:13.784096641 +0000 UTC m=+0.052695026 container remove 50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.791 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[d1be2b3a-0307-4478-9bf0-db0374daaebf]: (4, ('Thu Jan 22 02:11:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 (50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227)\n50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227\nThu Jan 22 02:11:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 (50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227)\n50e71e4d1674375e6b625b0ae0087bd042b06aac559f66a69e099077a0577227\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.794 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[55f78ea0-60ab-4305-8906-143fae011ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.796 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.799 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 kernel: tap18c81f01-30: left promiscuous mode
Jan 22 09:11:13 np0005592157 nova_compute[245707]: 2026-01-22 14:11:13.812 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.817 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[10931527-a5d5-4ec4-a151-ff55d3cba766]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:13 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.838 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[d27c39c5-41aa-4a6d-9a3a-cec6bf933106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.840 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[1584d245-b4ba-449c-938a-4d18776542ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.859 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[43728cbc-f160-4115-9d46-e370da43be31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483900, 'reachable_time': 30338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266103, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:13 np0005592157 systemd[1]: run-netns-ovnmeta\x2d18c81f01\x2d33be\x2d49a1\x2da179\x2daecc87794f99.mount: Deactivated successfully.
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.877 157842 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 09:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:13.879 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4fe36f-be04-4b93-808a-0745b87f1662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:14.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.153 245711 INFO nova.virt.libvirt.driver [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Deleting instance files /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e_del#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.154 245711 INFO nova.virt.libvirt.driver [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Deletion of /var/lib/nova/instances/37c19c36-0359-4d64-a1c8-2ed3def24e7e_del complete#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.253 245711 DEBUG nova.virt.libvirt.host [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.254 245711 INFO nova.virt.libvirt.host [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] UEFI support detected#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.258 245711 INFO nova.compute.manager [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.258 245711 DEBUG oslo.service.loopingcall [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.259 245711 DEBUG nova.compute.manager [-] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.259 245711 DEBUG nova.network.neutron [-] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 09:11:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:14.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.806 245711 DEBUG nova.compute.manager [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received event network-vif-unplugged-90d96c34-0f6a-46af-8bb7-b253ca521620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.807 245711 DEBUG oslo_concurrency.lockutils [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.807 245711 DEBUG oslo_concurrency.lockutils [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.808 245711 DEBUG oslo_concurrency.lockutils [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.808 245711 DEBUG nova.compute.manager [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] No waiting events found dispatching network-vif-unplugged-90d96c34-0f6a-46af-8bb7-b253ca521620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.809 245711 DEBUG nova.compute.manager [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received event network-vif-unplugged-90d96c34-0f6a-46af-8bb7-b253ca521620 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.809 245711 DEBUG nova.compute.manager [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received event network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.810 245711 DEBUG oslo_concurrency.lockutils [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.810 245711 DEBUG oslo_concurrency.lockutils [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.810 245711 DEBUG oslo_concurrency.lockutils [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.811 245711 DEBUG nova.compute.manager [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] No waiting events found dispatching network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:11:14 np0005592157 nova_compute[245707]: 2026-01-22 14:11:14.811 245711 WARNING nova.compute.manager [req-c5f73dde-9dfa-423a-9980-cfd1878a31a8 req-3f2dd8a1-8612-4b40-8e2b-2d4d24e018f2 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received unexpected event network-vif-plugged-90d96c34-0f6a-46af-8bb7-b253ca521620 for instance with vm_state active and task_state deleting.#033[00m
Jan 22 09:11:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 170 KiB/s rd, 1.7 MiB/s wr, 99 op/s
Jan 22 09:11:15 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:15.587 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:11:15 np0005592157 nova_compute[245707]: 2026-01-22 14:11:15.587 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:15 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:15.589 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:11:15 np0005592157 nova_compute[245707]: 2026-01-22 14:11:15.661 245711 DEBUG nova.network.neutron [-] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:11:15 np0005592157 nova_compute[245707]: 2026-01-22 14:11:15.684 245711 INFO nova.compute.manager [-] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Took 1.42 seconds to deallocate network for instance.#033[00m
Jan 22 09:11:15 np0005592157 nova_compute[245707]: 2026-01-22 14:11:15.765 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:15 np0005592157 nova_compute[245707]: 2026-01-22 14:11:15.766 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:15 np0005592157 nova_compute[245707]: 2026-01-22 14:11:15.889 245711 DEBUG oslo_concurrency.processutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:15 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:15 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:16 np0005592157 nova_compute[245707]: 2026-01-22 14:11:16.039 245711 DEBUG nova.compute.manager [req-d4eba262-677f-4ac3-b45a-9990c0bcef22 req-a9228165-b181-499e-8563-3db90a23e88c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Received event network-vif-deleted-90d96c34-0f6a-46af-8bb7-b253ca521620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:16.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231319645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:16 np0005592157 nova_compute[245707]: 2026-01-22 14:11:16.365 245711 DEBUG oslo_concurrency.processutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:16 np0005592157 nova_compute[245707]: 2026-01-22 14:11:16.374 245711 DEBUG nova.compute.provider_tree [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:11:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:16.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:16 np0005592157 nova_compute[245707]: 2026-01-22 14:11:16.435 245711 DEBUG nova.scheduler.client.report [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:11:16 np0005592157 nova_compute[245707]: 2026-01-22 14:11:16.470 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:11:16 np0005592157 nova_compute[245707]: 2026-01-22 14:11:16.539 245711 INFO nova.scheduler.client.report [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Deleted allocations for instance 37c19c36-0359-4d64-a1c8-2ed3def24e7e#033[00m
Jan 22 09:11:16 np0005592157 nova_compute[245707]: 2026-01-22 14:11:16.717 245711 DEBUG oslo_concurrency.lockutils [None req-97baaf40-09c5-44fb-87c7-b7432faed9f3 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "37c19c36-0359-4d64-a1c8-2ed3def24e7e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 1.6 MiB/s wr, 69 op/s
Jan 22 09:11:18 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:18.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:11:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3325865220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:11:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:11:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3325865220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:11:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:18.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:18 np0005592157 nova_compute[245707]: 2026-01-22 14:11:18.527 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:18 np0005592157 nova_compute[245707]: 2026-01-22 14:11:18.646 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:19 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 1.6 MiB/s wr, 76 op/s
Jan 22 09:11:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:20.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:20 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:20.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:21 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 1.6 MiB/s wr, 76 op/s
Jan 22 09:11:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:22 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 565 KiB/s wr, 46 op/s
Jan 22 09:11:23 np0005592157 nova_compute[245707]: 2026-01-22 14:11:23.566 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:23 np0005592157 nova_compute[245707]: 2026-01-22 14:11:23.649 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:24.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:24.592 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 565 KiB/s wr, 46 op/s
Jan 22 09:11:25 np0005592157 nova_compute[245707]: 2026-01-22 14:11:25.597 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:25 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:25 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:26.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:26 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Jan 22 09:11:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:28.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:28 np0005592157 podman[266137]: 2026-01-22 14:11:28.340903994 +0000 UTC m=+0.064945351 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:11:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:28.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:28 np0005592157 nova_compute[245707]: 2026-01-22 14:11:28.568 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:28 np0005592157 nova_compute[245707]: 2026-01-22 14:11:28.589 245711 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769091073.5872233, 37c19c36-0359-4d64-a1c8-2ed3def24e7e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:11:28 np0005592157 nova_compute[245707]: 2026-01-22 14:11:28.589 245711 INFO nova.compute.manager [-] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:11:28 np0005592157 nova_compute[245707]: 2026-01-22 14:11:28.651 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:28 np0005592157 nova_compute[245707]: 2026-01-22 14:11:28.657 245711 DEBUG nova.compute.manager [None req-47c0d0d2-0bef-427f-879c-08cb2bf1a858 - - - - - -] [instance: 37c19c36-0359-4d64-a1c8-2ed3def24e7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:11:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Jan 22 09:11:29 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:30.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:30.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:30 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:30 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:30 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:11:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:32.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:32 np0005592157 nova_compute[245707]: 2026-01-22 14:11:32.308 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:32.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:33 np0005592157 nova_compute[245707]: 2026-01-22 14:11:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:33 np0005592157 nova_compute[245707]: 2026-01-22 14:11:33.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:11:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:11:33 np0005592157 nova_compute[245707]: 2026-01-22 14:11:33.571 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:33 np0005592157 nova_compute[245707]: 2026-01-22 14:11:33.653 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:34.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:11:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:36.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:36 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:36 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:36 np0005592157 podman[266213]: 2026-01-22 14:11:36.371052723 +0000 UTC m=+0.109103293 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 09:11:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:36.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:37 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.277 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.279 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.280 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.280 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.280 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:11:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987265266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.731 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.953 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.955 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4833MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.956 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:37 np0005592157 nova_compute[245707]: 2026-01-22 14:11:37.956 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:38.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:38 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:38 np0005592157 nova_compute[245707]: 2026-01-22 14:11:38.318 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:11:38 np0005592157 nova_compute[245707]: 2026-01-22 14:11:38.319 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:11:38 np0005592157 nova_compute[245707]: 2026-01-22 14:11:38.320 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:11:38 np0005592157 nova_compute[245707]: 2026-01-22 14:11:38.320 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:11:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:38.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:38 np0005592157 nova_compute[245707]: 2026-01-22 14:11:38.485 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:38 np0005592157 nova_compute[245707]: 2026-01-22 14:11:38.575 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:38 np0005592157 nova_compute[245707]: 2026-01-22 14:11:38.655 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569152983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:39 np0005592157 nova_compute[245707]: 2026-01-22 14:11:39.024 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:11:39 np0005592157 nova_compute[245707]: 2026-01-22 14:11:39.032 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:11:39 np0005592157 nova_compute[245707]: 2026-01-22 14:11:39.054 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:11:39 np0005592157 nova_compute[245707]: 2026-01-22 14:11:39.078 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 09:11:39 np0005592157 nova_compute[245707]: 2026-01-22 14:11:39.079 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:11:39 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.075 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.114 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.115 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.115 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 09:11:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:40.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.135 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.136 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.136 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.137 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:11:40 np0005592157 nova_compute[245707]: 2026-01-22 14:11:40.138 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:11:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:40 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:40.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.296 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.297 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.327 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 09:11:41 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:41 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.464 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.465 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.472 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.473 245711 INFO nova.compute.claims [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Claim successful on node compute-0.ctlplane.example.com
Jan 22 09:11:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 22 09:11:41 np0005592157 nova_compute[245707]: 2026-01-22 14:11:41.686 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:11:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:42.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433737356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.174 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.180 245711 DEBUG nova.compute.provider_tree [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.268 245711 DEBUG nova.scheduler.client.report [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.379 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.381 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 09:11:42 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.441 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.442 245711 DEBUG nova.network.neutron [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 09:11:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.472 245711 INFO nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.496 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.626 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.628 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.628 245711 INFO nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Creating image(s)
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.665 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.706 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.747 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.752 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.849 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.850 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.851 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.851 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.888 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:11:42 np0005592157 nova_compute[245707]: 2026-01-22 14:11:42.894 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.192 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.280 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] resizing rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 22 09:11:43 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.423 245711 DEBUG nova.objects.instance [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lazy-loading 'migration_context' on Instance uuid 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.473 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.474 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Ensure instance console log exists: /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.475 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:11:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.476 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.477 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.611 245711 DEBUG nova.network.neutron [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.611 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.613 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_options': None, 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.614 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.622 245711 WARNING nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.631 245711 DEBUG nova.virt.libvirt.host [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.632 245711 DEBUG nova.virt.libvirt.host [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.635 245711 DEBUG nova.virt.libvirt.host [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.635 245711 DEBUG nova.virt.libvirt.host [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.637 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.638 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.638 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.639 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.639 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.640 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.640 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.641 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.641 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.641 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.642 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.642 245711 DEBUG nova.virt.hardware [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.648 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:43 np0005592157 nova_compute[245707]: 2026-01-22 14:11:43.669 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:11:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018484842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.099 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:44.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.144 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.150 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:44 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:44.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:11:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002379441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.628 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.631 245711 DEBUG nova.objects.instance [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.661 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <uuid>3b978b37-c3c4-4c2f-83ed-6e215e0c43f5</uuid>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <name>instance-00000009</name>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <memory>131072</memory>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <vcpu>1</vcpu>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <metadata>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1821256274</nova:name>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <nova:creationTime>2026-01-22 14:11:43</nova:creationTime>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <nova:flavor name="m1.nano">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <nova:memory>128</nova:memory>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <nova:disk>1</nova:disk>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <nova:swap>0</nova:swap>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      </nova:flavor>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <nova:owner>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <nova:user uuid="954d54358fc34858810c0e9b3866c2ad">tempest-DeleteServersAdminTestJSON-1718235342-project-member</nova:user>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <nova:project uuid="d066548ecdc24f11bb8d3b36c5301f7d">tempest-DeleteServersAdminTestJSON-1718235342</nova:project>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      </nova:owner>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <nova:ports/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </nova:instance>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  </metadata>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <sysinfo type="smbios">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <system>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <entry name="serial">3b978b37-c3c4-4c2f-83ed-6e215e0c43f5</entry>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <entry name="uuid">3b978b37-c3c4-4c2f-83ed-6e215e0c43f5</entry>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </system>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  </sysinfo>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <os>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <boot dev="hd"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <smbios mode="sysinfo"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  </os>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <features>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <acpi/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <apic/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <vmcoreinfo/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  </features>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <clock offset="utc">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <timer name="hpet" present="no"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  </clock>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <cpu mode="custom" match="exact">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <model>Nehalem</model>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  <devices>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <disk type="network" device="disk">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <target dev="vda" bus="virtio"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <disk type="network" device="cdrom">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk.config">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <target dev="sda" bus="sata"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <serial type="pty">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <log file="/var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/console.log" append="off"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </serial>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <video>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <model type="virtio"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </video>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <input type="tablet" bus="usb"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <rng model="virtio">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </rng>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <controller type="usb" index="0"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    <memballoon model="virtio">
Jan 22 09:11:44 np0005592157 nova_compute[245707]:      <stats period="10"/>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:    </memballoon>
Jan 22 09:11:44 np0005592157 nova_compute[245707]:  </devices>
Jan 22 09:11:44 np0005592157 nova_compute[245707]: </domain>
Jan 22 09:11:44 np0005592157 nova_compute[245707]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.757 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.758 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.759 245711 INFO nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Using config drive#033[00m
Jan 22 09:11:44 np0005592157 nova_compute[245707]: 2026-01-22 14:11:44.787 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:11:45 np0005592157 nova_compute[245707]: 2026-01-22 14:11:45.226 245711 INFO nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Creating config drive at /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/disk.config#033[00m
Jan 22 09:11:45 np0005592157 nova_compute[245707]: 2026-01-22 14:11:45.232 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6vf0a0y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 2093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:45 np0005592157 nova_compute[245707]: 2026-01-22 14:11:45.367 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6vf0a0y" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:45 np0005592157 nova_compute[245707]: 2026-01-22 14:11:45.399 245711 DEBUG nova.storage.rbd_utils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:11:45 np0005592157 nova_compute[245707]: 2026-01-22 14:11:45.405 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/disk.config 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 2093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.475324) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105475640, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2233, "num_deletes": 251, "total_data_size": 3206508, "memory_usage": 3263256, "flush_reason": "Manual Compaction"}
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 22 09:11:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 3.5 MiB/s wr, 42 op/s
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105508371, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 3142489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33202, "largest_seqno": 35434, "table_properties": {"data_size": 3133257, "index_size": 5342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23921, "raw_average_key_size": 21, "raw_value_size": 3112819, "raw_average_value_size": 2796, "num_data_blocks": 230, "num_entries": 1113, "num_filter_entries": 1113, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090934, "oldest_key_time": 1769090934, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 33034 microseconds, and 12350 cpu microseconds.
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.508527) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 3142489 bytes OK
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.508579) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.516176) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.516257) EVENT_LOG_v1 {"time_micros": 1769091105516244, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.516290) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 3197058, prev total WAL file size 3197058, number of live WAL files 2.
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.518895) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(3068KB)], [71(7663KB)]
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105519235, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10989652, "oldest_snapshot_seqno": -1}
Jan 22 09:11:45 np0005592157 nova_compute[245707]: 2026-01-22 14:11:45.618 245711 DEBUG oslo_concurrency.processutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/disk.config 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:45 np0005592157 nova_compute[245707]: 2026-01-22 14:11:45.619 245711 INFO nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Deleting local config drive /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5/disk.config because it was imported into RBD.#033[00m
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 7641 keys, 9277310 bytes, temperature: kUnknown
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105623531, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9277310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9231688, "index_size": 25414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 203010, "raw_average_key_size": 26, "raw_value_size": 9097641, "raw_average_value_size": 1190, "num_data_blocks": 983, "num_entries": 7641, "num_filter_entries": 7641, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.624098) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9277310 bytes
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.626308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.2 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8156, records dropped: 515 output_compression: NoCompression
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.626326) EVENT_LOG_v1 {"time_micros": 1769091105626317, "job": 40, "event": "compaction_finished", "compaction_time_micros": 104468, "compaction_time_cpu_micros": 58466, "output_level": 6, "num_output_files": 1, "total_output_size": 9277310, "num_input_records": 8156, "num_output_records": 7641, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105627219, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105628872, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.518457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.629795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.629805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.629807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.629811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:11:45.629813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592157 systemd-machined[211644]: New machine qemu-2-instance-00000009.
Jan 22 09:11:45 np0005592157 systemd[1]: Started Virtual Machine qemu-2-instance-00000009.
Jan 22 09:11:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:46.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:46.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:46 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.061 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769091107.060518, 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.063 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.067 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.068 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.074 245711 INFO nova.virt.libvirt.driver [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Instance spawned successfully.#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.075 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.100 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.105 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.117 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.117 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.118 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.118 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.118 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.119 245711 DEBUG nova.virt.libvirt.driver [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.142 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.142 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769091107.0623589, 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.143 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] VM Started (Lifecycle Event)#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.209 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.213 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.239 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.267 245711 INFO nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Took 4.64 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.267 245711 DEBUG nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.355 245711 INFO nova.compute.manager [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Took 5.96 seconds to build instance.#033[00m
Jan 22 09:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:11:47
Jan 22 09:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images', 'vms']
Jan 22 09:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:11:47 np0005592157 nova_compute[245707]: 2026-01-22 14:11:47.378 245711 DEBUG oslo_concurrency.lockutils [None req-f258a14f-7fb4-4aa6-a3e2-1ff35d19c4c6 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 3.5 MiB/s wr, 42 op/s
Jan 22 09:11:47 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:47.580 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:47.581 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:11:47.581 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:48.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:48.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:48 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:48 np0005592157 nova_compute[245707]: 2026-01-22 14:11:48.616 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:48 np0005592157 nova_compute[245707]: 2026-01-22 14:11:48.672 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.5 MiB/s wr, 91 op/s
Jan 22 09:11:49 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:50.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 2098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:50.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:50 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:50 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 2098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.721 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Acquiring lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.722 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.722 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Acquiring lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.723 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.723 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.725 245711 INFO nova.compute.manager [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Terminating instance#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.726 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Acquiring lock "refresh_cache-3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.727 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Acquired lock "refresh_cache-3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:11:50 np0005592157 nova_compute[245707]: 2026-01-22 14:11:50.727 245711 DEBUG nova.network.neutron [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:11:51 np0005592157 nova_compute[245707]: 2026-01-22 14:11:51.064 245711 DEBUG nova.network.neutron [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:11:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 22 09:11:51 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:51 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:52.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:52 np0005592157 nova_compute[245707]: 2026-01-22 14:11:52.214 245711 DEBUG nova.network.neutron [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:11:52 np0005592157 nova_compute[245707]: 2026-01-22 14:11:52.236 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Releasing lock "refresh_cache-3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:11:52 np0005592157 nova_compute[245707]: 2026-01-22 14:11:52.237 245711 DEBUG nova.compute.manager [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 09:11:52 np0005592157 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 22 09:11:52 np0005592157 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000009.scope: Consumed 6.758s CPU time.
Jan 22 09:11:52 np0005592157 systemd-machined[211644]: Machine qemu-2-instance-00000009 terminated.
Jan 22 09:11:52 np0005592157 nova_compute[245707]: 2026-01-22 14:11:52.462 245711 INFO nova.virt.libvirt.driver [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Instance destroyed successfully.#033[00m
Jan 22 09:11:52 np0005592157 nova_compute[245707]: 2026-01-22 14:11:52.463 245711 DEBUG nova.objects.instance [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Lazy-loading 'resources' on Instance uuid 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:11:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:52.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:52 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:52 np0005592157 nova_compute[245707]: 2026-01-22 14:11:52.905 245711 INFO nova.virt.libvirt.driver [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Deleting instance files /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_del#033[00m
Jan 22 09:11:52 np0005592157 nova_compute[245707]: 2026-01-22 14:11:52.906 245711 INFO nova.virt.libvirt.driver [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Deletion of /var/lib/nova/instances/3b978b37-c3c4-4c2f-83ed-6e215e0c43f5_del complete#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.053 245711 INFO nova.compute.manager [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.054 245711 DEBUG oslo.service.loopingcall [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.055 245711 DEBUG nova.compute.manager [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.055 245711 DEBUG nova.network.neutron [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 09:11:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.563 245711 DEBUG nova.network.neutron [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.582 245711 DEBUG nova.network.neutron [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:11:53 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.603 245711 INFO nova.compute.manager [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Took 0.55 seconds to deallocate network for instance.#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.657 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.673 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.676 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.676 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:53 np0005592157 nova_compute[245707]: 2026-01-22 14:11:53.847 245711 DEBUG oslo_concurrency.processutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:54.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093419999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:54 np0005592157 nova_compute[245707]: 2026-01-22 14:11:54.386 245711 DEBUG oslo_concurrency.processutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:54 np0005592157 nova_compute[245707]: 2026-01-22 14:11:54.396 245711 DEBUG nova.compute.provider_tree [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:11:54 np0005592157 nova_compute[245707]: 2026-01-22 14:11:54.447 245711 DEBUG nova.scheduler.client.report [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:11:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:54 np0005592157 nova_compute[245707]: 2026-01-22 14:11:54.480 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:54 np0005592157 nova_compute[245707]: 2026-01-22 14:11:54.514 245711 INFO nova.scheduler.client.report [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Deleted allocations for instance 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5#033[00m
Jan 22 09:11:54 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:54 np0005592157 nova_compute[245707]: 2026-01-22 14:11:54.630 245711 DEBUG oslo_concurrency.lockutils [None req-8b6504c4-4322-42da-86ea-318a1bc50a22 4add5cd4b04948889e4ad73f610bfce9 44dddc75a5a94fc1b53b8964d4f408f5 - - default default] Lock "3b978b37-c3c4-4c2f-83ed-6e215e0c43f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 2103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 22 09:11:55 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:55 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 2103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:11:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:56.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:11:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:56 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:11:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7646 writes, 35K keys, 7645 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7646 writes, 7645 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1889 writes, 8741 keys, 1889 commit groups, 1.0 writes per commit group, ingest: 11.13 MB, 0.02 MB/s#012Interval WAL: 1889 writes, 1889 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.5      0.46              0.19        20    0.023       0      0       0.0       0.0#012  L6      1/0    8.85 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.9    108.4     90.2      1.79              0.68        19    0.094    118K    11K       0.0       0.0#012 Sum      1/0    8.85 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.9     86.1     90.0      2.26              0.87        39    0.058    118K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.7     90.9     94.1      0.59              0.25        10    0.059     39K   3570       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    108.4     90.2      1.79              0.68        19    0.094    118K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.2      0.46              0.19        19    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.041, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.08 MB/s write, 0.19 GB read, 0.08 MB/s read, 2.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 21.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000257 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1142,20.36 MB,6.69867%) FilterBlock(40,345.36 KB,0.110942%) IndexBlock(40,511.02 KB,0.164157%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:11:57 np0005592157 ovn_controller[146940]: 2026-01-22T14:11:57Z|00035|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 22 09:11:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 09:11:57 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:58.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:11:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:11:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:11:58 np0005592157 nova_compute[245707]: 2026-01-22 14:11:58.660 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:58 np0005592157 nova_compute[245707]: 2026-01-22 14:11:58.675 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:11:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9e3c2bb1-785a-47c0-b1a0-cebe19376c38 does not exist
Jan 22 09:11:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1d5525f7-0445-4d96-966c-53375a4d425f does not exist
Jan 22 09:11:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c5d75490-50af-4064-8a51-72737071da13 does not exist
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:11:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:11:58 np0005592157 podman[266912]: 2026-01-22 14:11:58.94702227 +0000 UTC m=+0.066387047 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 09:11:59 np0005592157 podman[267044]: 2026-01-22 14:11:59.43212886 +0000 UTC m=+0.073928574 container create fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jennings, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:11:59 np0005592157 podman[267044]: 2026-01-22 14:11:59.386340169 +0000 UTC m=+0.028139933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:11:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 09:11:59 np0005592157 systemd[1]: Started libpod-conmon-fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51.scope.
Jan 22 09:11:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:11:59 np0005592157 podman[267044]: 2026-01-22 14:11:59.573725392 +0000 UTC m=+0.215525136 container init fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:11:59 np0005592157 podman[267044]: 2026-01-22 14:11:59.585560358 +0000 UTC m=+0.227360072 container start fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jennings, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:11:59 np0005592157 podman[267044]: 2026-01-22 14:11:59.589221889 +0000 UTC m=+0.231021603 container attach fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jennings, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:11:59 np0005592157 reverent_jennings[267061]: 167 167
Jan 22 09:11:59 np0005592157 systemd[1]: libpod-fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51.scope: Deactivated successfully.
Jan 22 09:11:59 np0005592157 podman[267044]: 2026-01-22 14:11:59.593318951 +0000 UTC m=+0.235118665 container died fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:11:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-01d3b45cbd51a52e61a697aa199c21d00931a82326f90471171a6562e66a3d0c-merged.mount: Deactivated successfully.
Jan 22 09:11:59 np0005592157 podman[267044]: 2026-01-22 14:11:59.64056914 +0000 UTC m=+0.282368854 container remove fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:11:59 np0005592157 systemd[1]: libpod-conmon-fe876374b15f29d34855b1ad069f72339640d5475e359332bbb6d3f6db7c6c51.scope: Deactivated successfully.
Jan 22 09:11:59 np0005592157 podman[267085]: 2026-01-22 14:11:59.832736263 +0000 UTC m=+0.067450923 container create 78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_northcutt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 09:11:59 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:11:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:11:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:11:59 np0005592157 systemd[1]: Started libpod-conmon-78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a.scope.
Jan 22 09:11:59 np0005592157 podman[267085]: 2026-01-22 14:11:59.793744751 +0000 UTC m=+0.028459401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:11:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:11:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe66ac2a7826fb8b05d27c8f1bbbf529c88c45e11faf3b83b306ec665a91fd0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:11:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe66ac2a7826fb8b05d27c8f1bbbf529c88c45e11faf3b83b306ec665a91fd0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:11:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe66ac2a7826fb8b05d27c8f1bbbf529c88c45e11faf3b83b306ec665a91fd0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:11:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe66ac2a7826fb8b05d27c8f1bbbf529c88c45e11faf3b83b306ec665a91fd0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:11:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe66ac2a7826fb8b05d27c8f1bbbf529c88c45e11faf3b83b306ec665a91fd0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:11:59 np0005592157 podman[267085]: 2026-01-22 14:11:59.913862637 +0000 UTC m=+0.148577287 container init 78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:11:59 np0005592157 podman[267085]: 2026-01-22 14:11:59.921269802 +0000 UTC m=+0.155984432 container start 78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_northcutt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 09:11:59 np0005592157 podman[267085]: 2026-01-22 14:11:59.924851002 +0000 UTC m=+0.159565662 container attach 78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:12:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:00.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 2108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:00.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:00 np0005592157 hardcore_northcutt[267102]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:12:00 np0005592157 hardcore_northcutt[267102]: --> relative data size: 1.0
Jan 22 09:12:00 np0005592157 hardcore_northcutt[267102]: --> All data devices are unavailable
Jan 22 09:12:00 np0005592157 systemd[1]: libpod-78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a.scope: Deactivated successfully.
Jan 22 09:12:00 np0005592157 podman[267085]: 2026-01-22 14:12:00.800407862 +0000 UTC m=+1.035122492 container died 78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_northcutt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:12:00 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:00 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 2108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fe66ac2a7826fb8b05d27c8f1bbbf529c88c45e11faf3b83b306ec665a91fd0a-merged.mount: Deactivated successfully.
Jan 22 09:12:01 np0005592157 podman[267085]: 2026-01-22 14:12:01.13784104 +0000 UTC m=+1.372555670 container remove 78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:12:01 np0005592157 systemd[1]: libpod-conmon-78ef82d84d963818e628cf66f2d822e91afff8911dca87facbde70fe67ad096a.scope: Deactivated successfully.
Jan 22 09:12:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 589 KiB/s rd, 13 KiB/s wr, 51 op/s
Jan 22 09:12:01 np0005592157 podman[267276]: 2026-01-22 14:12:01.897711146 +0000 UTC m=+0.105696318 container create eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_antonelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:12:01 np0005592157 podman[267276]: 2026-01-22 14:12:01.821497664 +0000 UTC m=+0.029482836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:12:01 np0005592157 systemd[1]: Started libpod-conmon-eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709.scope.
Jan 22 09:12:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:12:01 np0005592157 podman[267276]: 2026-01-22 14:12:01.996734276 +0000 UTC m=+0.204719468 container init eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:12:02 np0005592157 podman[267276]: 2026-01-22 14:12:02.005354331 +0000 UTC m=+0.213339503 container start eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_antonelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:12:02 np0005592157 loving_antonelli[267293]: 167 167
Jan 22 09:12:02 np0005592157 systemd[1]: libpod-eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709.scope: Deactivated successfully.
Jan 22 09:12:02 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:02 np0005592157 podman[267276]: 2026-01-22 14:12:02.020643152 +0000 UTC m=+0.228628324 container attach eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 22 09:12:02 np0005592157 podman[267276]: 2026-01-22 14:12:02.021232137 +0000 UTC m=+0.229217319 container died eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:12:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6190e975bfad44c3a312176964e987e0fa46dfc1eb1fe9d8f17e3cb5df3cf6ae-merged.mount: Deactivated successfully.
Jan 22 09:12:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000074s ======
Jan 22 09:12:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:02.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Jan 22 09:12:02 np0005592157 podman[267276]: 2026-01-22 14:12:02.228655561 +0000 UTC m=+0.436640733 container remove eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:12:02 np0005592157 systemd[1]: libpod-conmon-eada4af3c5758d00246b69ffd4f5a1f58f2afb472a2faf61ee190dba066ac709.scope: Deactivated successfully.
Jan 22 09:12:02 np0005592157 podman[267320]: 2026-01-22 14:12:02.462889214 +0000 UTC m=+0.109938783 container create 082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:12:02 np0005592157 podman[267320]: 2026-01-22 14:12:02.379956015 +0000 UTC m=+0.027005604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:12:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:02.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:02 np0005592157 systemd[1]: Started libpod-conmon-082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579.scope.
Jan 22 09:12:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:12:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e800b15d8a41c7f967402993f93c337c0c39715a85e805053c33a55a102292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e800b15d8a41c7f967402993f93c337c0c39715a85e805053c33a55a102292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e800b15d8a41c7f967402993f93c337c0c39715a85e805053c33a55a102292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e800b15d8a41c7f967402993f93c337c0c39715a85e805053c33a55a102292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:02 np0005592157 podman[267320]: 2026-01-22 14:12:02.570105159 +0000 UTC m=+0.217154758 container init 082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 09:12:02 np0005592157 podman[267320]: 2026-01-22 14:12:02.578533599 +0000 UTC m=+0.225583168 container start 082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:12:02 np0005592157 podman[267320]: 2026-01-22 14:12:02.584965 +0000 UTC m=+0.232014589 container attach 082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:12:03 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:03 np0005592157 strange_jennings[267336]: {
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:    "0": [
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:        {
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "devices": [
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "/dev/loop3"
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            ],
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "lv_name": "ceph_lv0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "lv_size": "7511998464",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "name": "ceph_lv0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "tags": {
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.cluster_name": "ceph",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.crush_device_class": "",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.encrypted": "0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.osd_id": "0",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.type": "block",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:                "ceph.vdo": "0"
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            },
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "type": "block",
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:            "vg_name": "ceph_vg0"
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:        }
Jan 22 09:12:03 np0005592157 strange_jennings[267336]:    ]
Jan 22 09:12:03 np0005592157 strange_jennings[267336]: }
Jan 22 09:12:03 np0005592157 systemd[1]: libpod-082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579.scope: Deactivated successfully.
Jan 22 09:12:03 np0005592157 podman[267346]: 2026-01-22 14:12:03.466843407 +0000 UTC m=+0.026758448 container died 082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:12:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 09:12:03 np0005592157 nova_compute[245707]: 2026-01-22 14:12:03.660 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-89e800b15d8a41c7f967402993f93c337c0c39715a85e805053c33a55a102292-merged.mount: Deactivated successfully.
Jan 22 09:12:03 np0005592157 nova_compute[245707]: 2026-01-22 14:12:03.677 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:03 np0005592157 podman[267346]: 2026-01-22 14:12:03.83580335 +0000 UTC m=+0.395718391 container remove 082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:12:03 np0005592157 systemd[1]: libpod-conmon-082505eb5669507e3df73fdbf2ebccc2e542a26ebebfa2db4466a33fc7d5f579.scope: Deactivated successfully.
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0027181262213847013 of space, bias 1.0, pg target 0.8154378664154104 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:12:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 09:12:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:04.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:04 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:04.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:04 np0005592157 podman[267503]: 2026-01-22 14:12:04.510101371 +0000 UTC m=+0.053678520 container create 6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:12:04 np0005592157 systemd[1]: Started libpod-conmon-6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2.scope.
Jan 22 09:12:04 np0005592157 podman[267503]: 2026-01-22 14:12:04.481172009 +0000 UTC m=+0.024749248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:12:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:12:04 np0005592157 podman[267503]: 2026-01-22 14:12:04.770656171 +0000 UTC m=+0.314233320 container init 6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:12:04 np0005592157 podman[267503]: 2026-01-22 14:12:04.77823937 +0000 UTC m=+0.321816519 container start 6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:12:04 np0005592157 funny_turing[267519]: 167 167
Jan 22 09:12:04 np0005592157 systemd[1]: libpod-6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2.scope: Deactivated successfully.
Jan 22 09:12:04 np0005592157 podman[267503]: 2026-01-22 14:12:04.859545128 +0000 UTC m=+0.403122287 container attach 6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 09:12:04 np0005592157 podman[267503]: 2026-01-22 14:12:04.86125031 +0000 UTC m=+0.404827499 container died 6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:12:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-60b1fd0272e742d75720bb1775a053ff5e822d742cf8b5e680bcc34b3f58e97f-merged.mount: Deactivated successfully.
Jan 22 09:12:04 np0005592157 podman[267503]: 2026-01-22 14:12:04.956109486 +0000 UTC m=+0.499686635 container remove 6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:12:04 np0005592157 systemd[1]: libpod-conmon-6da0734e4ae8e2a271cf77e1771596947decea2adc45cc9e4ee21d76a13a73b2.scope: Deactivated successfully.
Jan 22 09:12:05 np0005592157 podman[267547]: 2026-01-22 14:12:05.14748017 +0000 UTC m=+0.059572907 container create 08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williamson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:12:05 np0005592157 systemd[1]: Started libpod-conmon-08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad.scope.
Jan 22 09:12:05 np0005592157 podman[267547]: 2026-01-22 14:12:05.111703778 +0000 UTC m=+0.023796545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:12:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:12:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765fd9c125922c5579113e787b899e6dc18605d004d429e68464ad432bfbbbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765fd9c125922c5579113e787b899e6dc18605d004d429e68464ad432bfbbbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765fd9c125922c5579113e787b899e6dc18605d004d429e68464ad432bfbbbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765fd9c125922c5579113e787b899e6dc18605d004d429e68464ad432bfbbbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:12:05 np0005592157 podman[267547]: 2026-01-22 14:12:05.248187212 +0000 UTC m=+0.160279979 container init 08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williamson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 09:12:05 np0005592157 podman[267547]: 2026-01-22 14:12:05.259803612 +0000 UTC m=+0.171896349 container start 08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:12:05 np0005592157 podman[267547]: 2026-01-22 14:12:05.263444863 +0000 UTC m=+0.175537600 container attach 08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williamson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:12:05 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 2113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]: {
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]:        "osd_id": 0,
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]:        "type": "bluestore"
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]:    }
Jan 22 09:12:06 np0005592157 infallible_williamson[267563]: }
Jan 22 09:12:06 np0005592157 systemd[1]: libpod-08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad.scope: Deactivated successfully.
Jan 22 09:12:06 np0005592157 conmon[267563]: conmon 08d69d23a0085d0d7d17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad.scope/container/memory.events
Jan 22 09:12:06 np0005592157 podman[267547]: 2026-01-22 14:12:06.151186547 +0000 UTC m=+1.063279284 container died 08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williamson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:12:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:06.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e765fd9c125922c5579113e787b899e6dc18605d004d429e68464ad432bfbbbe-merged.mount: Deactivated successfully.
Jan 22 09:12:06 np0005592157 podman[267547]: 2026-01-22 14:12:06.211869371 +0000 UTC m=+1.123962108 container remove 08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williamson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:12:06 np0005592157 systemd[1]: libpod-conmon-08d69d23a0085d0d7d17faddbb6b1950523e6da6816be8efd16bddeb93a3e0ad.scope: Deactivated successfully.
Jan 22 09:12:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:12:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:12:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:06.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:06 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:06 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 2113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:06 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bca2003a-dd10-4a87-8d93-b34b5f8e15d7 does not exist
Jan 22 09:12:06 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 30403086-ee95-4419-bdfa-00715888a095 does not exist
Jan 22 09:12:06 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c3b603d0-946a-446f-b90f-514f07d45d25 does not exist
Jan 22 09:12:06 np0005592157 podman[267598]: 2026-01-22 14:12:06.660523631 +0000 UTC m=+0.094551958 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 09:12:07 np0005592157 nova_compute[245707]: 2026-01-22 14:12:07.459 245711 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769091112.4577587, 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:12:07 np0005592157 nova_compute[245707]: 2026-01-22 14:12:07.460 245711 INFO nova.compute.manager [-] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:12:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:12:07 np0005592157 nova_compute[245707]: 2026-01-22 14:12:07.516 245711 DEBUG nova.compute.manager [None req-182a22c4-376d-48f9-bfe2-60505c17e092 - - - - - -] [instance: 3b978b37-c3c4-4c2f-83ed-6e215e0c43f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:12:07 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:07 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:08.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:08.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:08 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:08 np0005592157 nova_compute[245707]: 2026-01-22 14:12:08.662 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:08 np0005592157 nova_compute[245707]: 2026-01-22 14:12:08.679 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:12:09 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:10.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 2118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:10.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:10 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:10 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 2118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:12:11 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:12.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:12.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:12 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:12:13 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:13 np0005592157 nova_compute[245707]: 2026-01-22 14:12:13.680 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:13 np0005592157 nova_compute[245707]: 2026-01-22 14:12:13.681 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:13 np0005592157 nova_compute[245707]: 2026-01-22 14:12:13.681 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:12:13 np0005592157 nova_compute[245707]: 2026-01-22 14:12:13.681 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:13 np0005592157 nova_compute[245707]: 2026-01-22 14:12:13.720 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:13 np0005592157 nova_compute[245707]: 2026-01-22 14:12:13.721 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:14.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:14.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:14 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 2123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:12:15 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:15 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 2123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:16.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:16.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:12:17 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:18.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:18.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:18 np0005592157 nova_compute[245707]: 2026-01-22 14:12:18.723 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:18 np0005592157 nova_compute[245707]: 2026-01-22 14:12:18.725 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:18 np0005592157 nova_compute[245707]: 2026-01-22 14:12:18.725 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:12:18 np0005592157 nova_compute[245707]: 2026-01-22 14:12:18.725 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:18 np0005592157 nova_compute[245707]: 2026-01-22 14:12:18.733 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:18 np0005592157 nova_compute[245707]: 2026-01-22 14:12:18.734 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:18 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:19 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:19 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 2128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:21 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:21 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 2128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:22 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.451 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "b8bec212-84ad-47fd-9608-2cc1999da6c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.451 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "b8bec212-84ad-47fd-9608-2cc1999da6c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.466 245711 DEBUG nova.compute.manager [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:12:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:22.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.552 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.553 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.625 245711 DEBUG nova.virt.hardware [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.625 245711 INFO nova.compute.claims [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 09:12:22 np0005592157 nova_compute[245707]: 2026-01-22 14:12:22.888 245711 DEBUG oslo_concurrency.processutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:12:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692563106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:12:23 np0005592157 nova_compute[245707]: 2026-01-22 14:12:23.398 245711 DEBUG oslo_concurrency.processutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:23 np0005592157 nova_compute[245707]: 2026-01-22 14:12:23.410 245711 DEBUG nova.compute.provider_tree [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:12:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:23 np0005592157 nova_compute[245707]: 2026-01-22 14:12:23.634 245711 DEBUG nova.scheduler.client.report [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:12:23 np0005592157 nova_compute[245707]: 2026-01-22 14:12:23.735 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:23 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:24.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:24.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:24 np0005592157 nova_compute[245707]: 2026-01-22 14:12:24.725 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:24 np0005592157 nova_compute[245707]: 2026-01-22 14:12:24.726 245711 DEBUG nova.compute.manager [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:12:24 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:24 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 2133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:25 np0005592157 nova_compute[245707]: 2026-01-22 14:12:25.437 245711 DEBUG nova.compute.manager [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:12:25 np0005592157 nova_compute[245707]: 2026-01-22 14:12:25.438 245711 DEBUG nova.network.neutron [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:12:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:25 np0005592157 nova_compute[245707]: 2026-01-22 14:12:25.684 245711 INFO nova.virt.libvirt.driver [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:12:25 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:25 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 2133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.164 245711 DEBUG nova.compute.manager [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:12:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:26.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:26.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.531 245711 DEBUG nova.compute.manager [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.533 245711 DEBUG nova.virt.libvirt.driver [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.533 245711 INFO nova.virt.libvirt.driver [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Creating image(s)#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.566 245711 DEBUG nova.storage.rbd_utils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image b8bec212-84ad-47fd-9608-2cc1999da6c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.600 245711 DEBUG nova.storage.rbd_utils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image b8bec212-84ad-47fd-9608-2cc1999da6c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.638 245711 DEBUG nova.storage.rbd_utils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image b8bec212-84ad-47fd-9608-2cc1999da6c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.643 245711 DEBUG oslo_concurrency.processutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.711 245711 DEBUG oslo_concurrency.processutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.713 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.714 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.714 245711 DEBUG oslo_concurrency.lockutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.754 245711 DEBUG nova.storage.rbd_utils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image b8bec212-84ad-47fd-9608-2cc1999da6c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:26 np0005592157 nova_compute[245707]: 2026-01-22 14:12:26.761 245711 DEBUG oslo_concurrency.processutils [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 b8bec212-84ad-47fd-9608-2cc1999da6c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:26 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:27 np0005592157 nova_compute[245707]: 2026-01-22 14:12:27.242 245711 DEBUG nova.network.neutron [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 22 09:12:27 np0005592157 nova_compute[245707]: 2026-01-22 14:12:27.243 245711 DEBUG nova.compute.manager [None req-fbc49964-c6a2-4144-8e62-8d45e60567ac 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:12:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:28 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:28.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:28.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:28 np0005592157 nova_compute[245707]: 2026-01-22 14:12:28.738 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 223 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 391 KiB/s wr, 1 op/s
Jan 22 09:12:29 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:29 np0005592157 podman[267855]: 2026-01-22 14:12:29.723252123 +0000 UTC m=+0.090747754 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:12:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 2138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:30.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:31 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:31 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:31 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 2138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 09:12:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:32.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:32 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:12:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:32.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:12:33 np0005592157 nova_compute[245707]: 2026-01-22 14:12:33.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:33 np0005592157 nova_compute[245707]: 2026-01-22 14:12:33.246 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:12:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 09:12:33 np0005592157 nova_compute[245707]: 2026-01-22 14:12:33.741 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:33 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:34.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:34 np0005592157 nova_compute[245707]: 2026-01-22 14:12:34.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:35 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:35 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 2143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 09:12:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:36.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:36 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:36 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 2143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:36.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:37 np0005592157 podman[267928]: 2026-01-22 14:12:37.359967689 +0000 UTC m=+0.099716388 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:12:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 09:12:37 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:38.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:38 np0005592157 nova_compute[245707]: 2026-01-22 14:12:38.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:38.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:38 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592157 nova_compute[245707]: 2026-01-22 14:12:38.743 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:38 np0005592157 nova_compute[245707]: 2026-01-22 14:12:38.744 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:38 np0005592157 nova_compute[245707]: 2026-01-22 14:12:38.744 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:12:38 np0005592157 nova_compute[245707]: 2026-01-22 14:12:38.744 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:38 np0005592157 nova_compute[245707]: 2026-01-22 14:12:38.745 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:38 np0005592157 nova_compute[245707]: 2026-01-22 14:12:38.746 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.301 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.302 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.302 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.303 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.303 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 09:12:39 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:12:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207568101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:12:39 np0005592157 nova_compute[245707]: 2026-01-22 14:12:39.814 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.040 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.042 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4805MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.042 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.042 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:40.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.244 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.245 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.245 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.246 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.246 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.370 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 2148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:40.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:12:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007474767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.830 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.838 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:12:40 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 2148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:40 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:40 np0005592157 nova_compute[245707]: 2026-01-22 14:12:40.994 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:12:41 np0005592157 nova_compute[245707]: 2026-01-22 14:12:41.029 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:12:41 np0005592157 nova_compute[245707]: 2026-01-22 14:12:41.030 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 09:12:42 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:42.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.029 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.031 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.031 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.031 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:12:43 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.064 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.065 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.065 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.065 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.066 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.747 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.749 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.749 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.749 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.783 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:43 np0005592157 nova_compute[245707]: 2026-01-22 14:12:43.784 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:44 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:44.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:44.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:45 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 2153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:46 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 2153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:46 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:46.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:12:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:46.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:47 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:12:47
Jan 22 09:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes', 'images']
Jan 22 09:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:12:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:12:47.581 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:12:47.582 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:12:47.582 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:48 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:48.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:48.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:48 np0005592157 nova_compute[245707]: 2026-01-22 14:12:48.786 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:48 np0005592157 nova_compute[245707]: 2026-01-22 14:12:48.787 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:48 np0005592157 nova_compute[245707]: 2026-01-22 14:12:48.788 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:12:48 np0005592157 nova_compute[245707]: 2026-01-22 14:12:48.788 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:48 np0005592157 nova_compute[245707]: 2026-01-22 14:12:48.830 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:48 np0005592157 nova_compute[245707]: 2026-01-22 14:12:48.831 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:49 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:50 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:50.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 2157 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:50.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:51 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:51 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 2157 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:52 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:52.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:53 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:53 np0005592157 nova_compute[245707]: 2026-01-22 14:12:53.831 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:54.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:54 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:54.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:55 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 2162 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:56.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:56 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:56 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 2162 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:56.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:57 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:57 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:12:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:58.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:12:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:12:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:58.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:58 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:12:58 np0005592157 nova_compute[245707]: 2026-01-22 14:12:58.833 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:12:58 np0005592157 nova_compute[245707]: 2026-01-22 14:12:58.835 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:58 np0005592157 nova_compute[245707]: 2026-01-22 14:12:58.835 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:12:58 np0005592157 nova_compute[245707]: 2026-01-22 14:12:58.835 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:58 np0005592157 nova_compute[245707]: 2026-01-22 14:12:58.836 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:12:58 np0005592157 nova_compute[245707]: 2026-01-22 14:12:58.837 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:12:59 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:00.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:00 np0005592157 podman[268063]: 2026-01-22 14:13:00.328481195 +0000 UTC m=+0.064737909 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:13:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 2167 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:00.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:00 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:00 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 2167 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:02 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:02.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:02.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:03 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:03 np0005592157 nova_compute[245707]: 2026-01-22 14:13:03.838 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:03 np0005592157 nova_compute[245707]: 2026-01-22 14:13:03.840 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:03 np0005592157 nova_compute[245707]: 2026-01-22 14:13:03.840 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:13:03 np0005592157 nova_compute[245707]: 2026-01-22 14:13:03.840 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:03 np0005592157 nova_compute[245707]: 2026-01-22 14:13:03.892 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:03 np0005592157 nova_compute[245707]: 2026-01-22 14:13:03.894 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004369903626930951 of space, bias 1.0, pg target 1.3109710880792853 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:13:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:13:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:04.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:04 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:04.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:05 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2172 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:06.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:06 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:06 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2172 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:06.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:07 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:07 np0005592157 podman[268229]: 2026-01-22 14:13:07.830668136 +0000 UTC m=+0.108630787 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 09:13:07 np0005592157 podman[268279]: 2026-01-22 14:13:07.937146068 +0000 UTC m=+0.070154435 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:13:08 np0005592157 podman[268279]: 2026-01-22 14:13:08.063530327 +0000 UTC m=+0.196538684 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:13:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:08.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:08 np0005592157 podman[268431]: 2026-01-22 14:13:08.77200207 +0000 UTC m=+0.062733179 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:13:08 np0005592157 podman[268431]: 2026-01-22 14:13:08.78638602 +0000 UTC m=+0.077117109 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:13:08 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:08 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:08 np0005592157 nova_compute[245707]: 2026-01-22 14:13:08.895 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:08 np0005592157 nova_compute[245707]: 2026-01-22 14:13:08.898 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:08 np0005592157 nova_compute[245707]: 2026-01-22 14:13:08.898 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:13:08 np0005592157 nova_compute[245707]: 2026-01-22 14:13:08.898 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:08 np0005592157 nova_compute[245707]: 2026-01-22 14:13:08.937 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:08 np0005592157 nova_compute[245707]: 2026-01-22 14:13:08.938 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:09 np0005592157 podman[268499]: 2026-01-22 14:13:09.078595735 +0000 UTC m=+0.059726794 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container)
Jan 22 09:13:09 np0005592157 podman[268499]: 2026-01-22 14:13:09.090055222 +0000 UTC m=+0.071186251 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, release=1793, vcs-type=git, version=2.2.4)
Jan 22 09:13:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:13:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:13:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:10.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2177 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b75d6808-1d09-43ec-9034-1990f9432caf does not exist
Jan 22 09:13:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev adfbd6d5-0c61-4bda-80de-a6037c93500a does not exist
Jan 22 09:13:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bbf90865-34e1-40f5-b0b2-6e0644657938 does not exist
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:13:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:10.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:13:11 np0005592157 podman[268857]: 2026-01-22 14:13:11.132083113 +0000 UTC m=+0.039162180 container create bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:13:11 np0005592157 systemd[1]: Started libpod-conmon-bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609.scope.
Jan 22 09:13:11 np0005592157 podman[268857]: 2026-01-22 14:13:11.115651052 +0000 UTC m=+0.022730139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:13:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:13:11 np0005592157 podman[268857]: 2026-01-22 14:13:11.230903604 +0000 UTC m=+0.137982721 container init bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:13:11 np0005592157 podman[268857]: 2026-01-22 14:13:11.238123554 +0000 UTC m=+0.145202621 container start bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:13:11 np0005592157 podman[268857]: 2026-01-22 14:13:11.242157195 +0000 UTC m=+0.149236302 container attach bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:13:11 np0005592157 bold_chandrasekhar[268874]: 167 167
Jan 22 09:13:11 np0005592157 systemd[1]: libpod-bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609.scope: Deactivated successfully.
Jan 22 09:13:11 np0005592157 podman[268857]: 2026-01-22 14:13:11.254303599 +0000 UTC m=+0.161382666 container died bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:13:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-faa3e8fdad653b29cf78c936b1dbf3c89a230e32520ac15ca0294f653df82d10-merged.mount: Deactivated successfully.
Jan 22 09:13:11 np0005592157 podman[268857]: 2026-01-22 14:13:11.300459143 +0000 UTC m=+0.207538200 container remove bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:13:11 np0005592157 systemd[1]: libpod-conmon-bb8b78aa0af72b6491df09bebbfb1ff97600f0bf5e697aceca1f677705005609.scope: Deactivated successfully.
Jan 22 09:13:11 np0005592157 podman[268897]: 2026-01-22 14:13:11.489001586 +0000 UTC m=+0.054943844 container create 5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 22 09:13:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:11 np0005592157 systemd[1]: Started libpod-conmon-5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee.scope.
Jan 22 09:13:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:13:11 np0005592157 podman[268897]: 2026-01-22 14:13:11.464723309 +0000 UTC m=+0.030665617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:13:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91322a917dca48c62bf03dd5c23b24b2bcfd768b393e0428c4c12aadd980223d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91322a917dca48c62bf03dd5c23b24b2bcfd768b393e0428c4c12aadd980223d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91322a917dca48c62bf03dd5c23b24b2bcfd768b393e0428c4c12aadd980223d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91322a917dca48c62bf03dd5c23b24b2bcfd768b393e0428c4c12aadd980223d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91322a917dca48c62bf03dd5c23b24b2bcfd768b393e0428c4c12aadd980223d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:11 np0005592157 podman[268897]: 2026-01-22 14:13:11.584774981 +0000 UTC m=+0.150717269 container init 5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:13:11 np0005592157 podman[268897]: 2026-01-22 14:13:11.591270893 +0000 UTC m=+0.157213151 container start 5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:13:11 np0005592157 podman[268897]: 2026-01-22 14:13:11.599473368 +0000 UTC m=+0.165415626 container attach 5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Jan 22 09:13:11 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:11 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2177 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:13:11 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:12 np0005592157 peaceful_pike[268915]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:13:12 np0005592157 peaceful_pike[268915]: --> relative data size: 1.0
Jan 22 09:13:12 np0005592157 peaceful_pike[268915]: --> All data devices are unavailable
Jan 22 09:13:12 np0005592157 systemd[1]: libpod-5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee.scope: Deactivated successfully.
Jan 22 09:13:12 np0005592157 conmon[268915]: conmon 5b87236193afa250cdd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee.scope/container/memory.events
Jan 22 09:13:12 np0005592157 podman[268897]: 2026-01-22 14:13:12.433975872 +0000 UTC m=+0.999918130 container died 5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:13:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-91322a917dca48c62bf03dd5c23b24b2bcfd768b393e0428c4c12aadd980223d-merged.mount: Deactivated successfully.
Jan 22 09:13:12 np0005592157 podman[268897]: 2026-01-22 14:13:12.949596523 +0000 UTC m=+1.515538781 container remove 5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:13:12 np0005592157 systemd[1]: libpod-conmon-5b87236193afa250cdd8bd27c02e321e1013ee3694709e7add0231dab0a483ee.scope: Deactivated successfully.
Jan 22 09:13:13 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:13 np0005592157 podman[269082]: 2026-01-22 14:13:13.596387273 +0000 UTC m=+0.039224062 container create 2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:13:13 np0005592157 systemd[1]: Started libpod-conmon-2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16.scope.
Jan 22 09:13:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:13:13 np0005592157 podman[269082]: 2026-01-22 14:13:13.579293756 +0000 UTC m=+0.022130565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:13:13 np0005592157 podman[269082]: 2026-01-22 14:13:13.677212374 +0000 UTC m=+0.120049253 container init 2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:13:13 np0005592157 podman[269082]: 2026-01-22 14:13:13.684710961 +0000 UTC m=+0.127547750 container start 2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:13:13 np0005592157 elegant_heyrovsky[269098]: 167 167
Jan 22 09:13:13 np0005592157 systemd[1]: libpod-2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16.scope: Deactivated successfully.
Jan 22 09:13:13 np0005592157 podman[269082]: 2026-01-22 14:13:13.690412194 +0000 UTC m=+0.133249023 container attach 2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:13:13 np0005592157 podman[269082]: 2026-01-22 14:13:13.690913926 +0000 UTC m=+0.133750755 container died 2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:13:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7a674b6c0d5c24668b21d2f87f33419d51a38ea3133550ca718e46af395f46de-merged.mount: Deactivated successfully.
Jan 22 09:13:13 np0005592157 podman[269082]: 2026-01-22 14:13:13.784125807 +0000 UTC m=+0.226962616 container remove 2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:13:13 np0005592157 systemd[1]: libpod-conmon-2a16f6436e24df5de170e00439570c60163ef69ab4365adf53431d5c9695ab16.scope: Deactivated successfully.
Jan 22 09:13:13 np0005592157 nova_compute[245707]: 2026-01-22 14:13:13.939 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:13 np0005592157 nova_compute[245707]: 2026-01-22 14:13:13.943 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:13 np0005592157 nova_compute[245707]: 2026-01-22 14:13:13.943 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:13:13 np0005592157 nova_compute[245707]: 2026-01-22 14:13:13.943 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:13 np0005592157 nova_compute[245707]: 2026-01-22 14:13:13.992 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:13 np0005592157 nova_compute[245707]: 2026-01-22 14:13:13.993 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:14 np0005592157 podman[269123]: 2026-01-22 14:13:14.024367293 +0000 UTC m=+0.103715854 container create 9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:13:14 np0005592157 podman[269123]: 2026-01-22 14:13:13.995728757 +0000 UTC m=+0.075077338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:13:14 np0005592157 systemd[1]: Started libpod-conmon-9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed.scope.
Jan 22 09:13:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:13:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b15f8d30d54dcfe810f6c0b26744d8b5e682fa82bfa6c4f76125c196dcf7e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b15f8d30d54dcfe810f6c0b26744d8b5e682fa82bfa6c4f76125c196dcf7e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b15f8d30d54dcfe810f6c0b26744d8b5e682fa82bfa6c4f76125c196dcf7e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b15f8d30d54dcfe810f6c0b26744d8b5e682fa82bfa6c4f76125c196dcf7e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:14 np0005592157 podman[269123]: 2026-01-22 14:13:14.128484176 +0000 UTC m=+0.207832767 container init 9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:13:14 np0005592157 podman[269123]: 2026-01-22 14:13:14.137906752 +0000 UTC m=+0.217255323 container start 9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:13:14 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:14 np0005592157 podman[269123]: 2026-01-22 14:13:14.142401004 +0000 UTC m=+0.221749565 container attach 9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:13:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]: {
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:    "0": [
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:        {
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "devices": [
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "/dev/loop3"
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            ],
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "lv_name": "ceph_lv0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "lv_size": "7511998464",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "name": "ceph_lv0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "tags": {
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.cluster_name": "ceph",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.crush_device_class": "",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.encrypted": "0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.osd_id": "0",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.type": "block",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:                "ceph.vdo": "0"
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            },
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "type": "block",
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:            "vg_name": "ceph_vg0"
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:        }
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]:    ]
Jan 22 09:13:14 np0005592157 happy_brahmagupta[269138]: }
Jan 22 09:13:15 np0005592157 systemd[1]: libpod-9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed.scope: Deactivated successfully.
Jan 22 09:13:15 np0005592157 podman[269123]: 2026-01-22 14:13:15.010145668 +0000 UTC m=+1.089494249 container died 9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 09:13:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7b15f8d30d54dcfe810f6c0b26744d8b5e682fa82bfa6c4f76125c196dcf7e5e-merged.mount: Deactivated successfully.
Jan 22 09:13:15 np0005592157 podman[269123]: 2026-01-22 14:13:15.074481176 +0000 UTC m=+1.153829737 container remove 9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:13:15 np0005592157 systemd[1]: libpod-conmon-9862281989f9ff66bd72ae104a906cddd992baf3c78d11c63f16d655342a8eed.scope: Deactivated successfully.
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2182 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.714864) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195715157, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1290, "num_deletes": 256, "total_data_size": 1725853, "memory_usage": 1754432, "flush_reason": "Manual Compaction"}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195728678, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1687738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35435, "largest_seqno": 36724, "table_properties": {"data_size": 1682028, "index_size": 2850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14642, "raw_average_key_size": 20, "raw_value_size": 1669526, "raw_average_value_size": 2351, "num_data_blocks": 124, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091106, "oldest_key_time": 1769091106, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 13820 microseconds, and 6271 cpu microseconds.
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.728807) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1687738 bytes OK
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.728847) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.730539) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.730569) EVENT_LOG_v1 {"time_micros": 1769091195730562, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.730597) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1719903, prev total WAL file size 1719903, number of live WAL files 2.
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.731605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1648KB)], [74(9059KB)]
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195731739, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10965048, "oldest_snapshot_seqno": -1}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 7826 keys, 10801358 bytes, temperature: kUnknown
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195829017, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10801358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10753175, "index_size": 27527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 208473, "raw_average_key_size": 26, "raw_value_size": 10614496, "raw_average_value_size": 1356, "num_data_blocks": 1068, "num_entries": 7826, "num_filter_entries": 7826, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.829410) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10801358 bytes
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.831355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.6 rd, 110.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 8351, records dropped: 525 output_compression: NoCompression
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.831379) EVENT_LOG_v1 {"time_micros": 1769091195831366, "job": 42, "event": "compaction_finished", "compaction_time_micros": 97392, "compaction_time_cpu_micros": 37838, "output_level": 6, "num_output_files": 1, "total_output_size": 10801358, "num_input_records": 8351, "num_output_records": 7826, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195831976, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195833671, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.731387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.833764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.833775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.833778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.833781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:13:15.833784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592157 podman[269306]: 2026-01-22 14:13:15.833282327 +0000 UTC m=+0.055093449 container create c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:13:15 np0005592157 systemd[1]: Started libpod-conmon-c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222.scope.
Jan 22 09:13:15 np0005592157 podman[269306]: 2026-01-22 14:13:15.804650401 +0000 UTC m=+0.026461553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:13:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:13:15 np0005592157 podman[269306]: 2026-01-22 14:13:15.936902437 +0000 UTC m=+0.158713579 container init c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:13:15 np0005592157 podman[269306]: 2026-01-22 14:13:15.944517958 +0000 UTC m=+0.166329090 container start c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_euclid, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:13:15 np0005592157 podman[269306]: 2026-01-22 14:13:15.95102594 +0000 UTC m=+0.172837072 container attach c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_euclid, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:13:15 np0005592157 dreamy_euclid[269320]: 167 167
Jan 22 09:13:15 np0005592157 systemd[1]: libpod-c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222.scope: Deactivated successfully.
Jan 22 09:13:15 np0005592157 podman[269306]: 2026-01-22 14:13:15.953843031 +0000 UTC m=+0.175654163 container died c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:13:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-30197488d223ef894a006b846190b9bee417794ce2e43cdf7bbf41d2ff378a36-merged.mount: Deactivated successfully.
Jan 22 09:13:15 np0005592157 podman[269306]: 2026-01-22 14:13:15.997983064 +0000 UTC m=+0.219794226 container remove c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_euclid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:13:16 np0005592157 systemd[1]: libpod-conmon-c5ed9bb447bb158569df7c1f1b3414db3d892a63c6e509f47a3314d05aad4222.scope: Deactivated successfully.
Jan 22 09:13:16 np0005592157 podman[269347]: 2026-01-22 14:13:16.211214815 +0000 UTC m=+0.072154375 container create 3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:13:16 np0005592157 systemd[1]: Started libpod-conmon-3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535.scope.
Jan 22 09:13:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:16 np0005592157 podman[269347]: 2026-01-22 14:13:16.183036361 +0000 UTC m=+0.043975981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:13:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:16.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:13:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1851fac6babe58d0307fc9a9cea3ccdee8591cc29cec9401caa067cd540b200f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1851fac6babe58d0307fc9a9cea3ccdee8591cc29cec9401caa067cd540b200f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1851fac6babe58d0307fc9a9cea3ccdee8591cc29cec9401caa067cd540b200f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1851fac6babe58d0307fc9a9cea3ccdee8591cc29cec9401caa067cd540b200f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:13:16 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:16 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2182 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:16 np0005592157 podman[269347]: 2026-01-22 14:13:16.310509168 +0000 UTC m=+0.171448758 container init 3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:13:16 np0005592157 podman[269347]: 2026-01-22 14:13:16.326668922 +0000 UTC m=+0.187608472 container start 3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:13:16 np0005592157 podman[269347]: 2026-01-22 14:13:16.33058916 +0000 UTC m=+0.191528700 container attach 3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 09:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:13:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]: {
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]:        "osd_id": 0,
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]:        "type": "bluestore"
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]:    }
Jan 22 09:13:17 np0005592157 admiring_shamir[269364]: }
Jan 22 09:13:17 np0005592157 systemd[1]: libpod-3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535.scope: Deactivated successfully.
Jan 22 09:13:17 np0005592157 podman[269347]: 2026-01-22 14:13:17.243318409 +0000 UTC m=+1.104257959 container died 3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 09:13:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1851fac6babe58d0307fc9a9cea3ccdee8591cc29cec9401caa067cd540b200f-merged.mount: Deactivated successfully.
Jan 22 09:13:17 np0005592157 podman[269347]: 2026-01-22 14:13:17.315911424 +0000 UTC m=+1.176850974 container remove 3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:13:17 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:17 np0005592157 systemd[1]: libpod-conmon-3753b9a3abc667edf9ea0d0d4ddc2f79118fd4b9caad89df0fb0399d36d0a535.scope: Deactivated successfully.
Jan 22 09:13:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:13:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:13:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 00837cd0-601a-432d-a32c-c3fb0f41fdee does not exist
Jan 22 09:13:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3d5daa4c-1c51-4162-969a-51a950b65f94 does not exist
Jan 22 09:13:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e2f834a0-a894-456a-bba3-24708b500a20 does not exist
Jan 22 09:13:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:13:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/379890725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:13:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:13:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/379890725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:13:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:18.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:18 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:18.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:18 np0005592157 nova_compute[245707]: 2026-01-22 14:13:18.995 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:19 np0005592157 nova_compute[245707]: 2026-01-22 14:13:18.997 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:19 np0005592157 nova_compute[245707]: 2026-01-22 14:13:18.997 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:13:19 np0005592157 nova_compute[245707]: 2026-01-22 14:13:18.997 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:19 np0005592157 nova_compute[245707]: 2026-01-22 14:13:19.041 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:19 np0005592157 nova_compute[245707]: 2026-01-22 14:13:19.042 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:19 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:20.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:20 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2187 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:21 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:21 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2187 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:22.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:22 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:22.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:23 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:24 np0005592157 nova_compute[245707]: 2026-01-22 14:13:24.043 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:24 np0005592157 nova_compute[245707]: 2026-01-22 14:13:24.045 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:24 np0005592157 nova_compute[245707]: 2026-01-22 14:13:24.045 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:13:24 np0005592157 nova_compute[245707]: 2026-01-22 14:13:24.045 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:24 np0005592157 nova_compute[245707]: 2026-01-22 14:13:24.122 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:24 np0005592157 nova_compute[245707]: 2026-01-22 14:13:24.123 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:24.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:24 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2192 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:25 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:25 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2192 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:26.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:26 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:26 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:27 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:28.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:28.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:28 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:29 np0005592157 nova_compute[245707]: 2026-01-22 14:13:29.124 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:29 np0005592157 nova_compute[245707]: 2026-01-22 14:13:29.126 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:29 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:30.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:30 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:30 np0005592157 podman[269480]: 2026-01-22 14:13:30.88068431 +0000 UTC m=+0.068527634 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 09:13:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:31 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:31 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:32.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:32 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:33 np0005592157 nova_compute[245707]: 2026-01-22 14:13:33.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:33 np0005592157 nova_compute[245707]: 2026-01-22 14:13:33.246 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:13:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:33 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:34 np0005592157 nova_compute[245707]: 2026-01-22 14:13:34.128 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:34 np0005592157 nova_compute[245707]: 2026-01-22 14:13:34.130 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:34 np0005592157 nova_compute[245707]: 2026-01-22 14:13:34.131 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:13:34 np0005592157 nova_compute[245707]: 2026-01-22 14:13:34.131 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:34 np0005592157 nova_compute[245707]: 2026-01-22 14:13:34.183 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:34 np0005592157 nova_compute[245707]: 2026-01-22 14:13:34.184 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:34.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:34 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:35 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:35 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:36 np0005592157 nova_compute[245707]: 2026-01-22 14:13:36.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:36.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:36.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:36 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:37 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:38 np0005592157 nova_compute[245707]: 2026-01-22 14:13:38.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:38.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:38 np0005592157 podman[269530]: 2026-01-22 14:13:38.397517056 +0000 UTC m=+0.117102778 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 22 09:13:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:38.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:38 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:39 np0005592157 nova_compute[245707]: 2026-01-22 14:13:39.185 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:39 np0005592157 nova_compute[245707]: 2026-01-22 14:13:39.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:39 np0005592157 nova_compute[245707]: 2026-01-22 14:13:39.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:39 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:40 np0005592157 nova_compute[245707]: 2026-01-22 14:13:40.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:40.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:40 np0005592157 nova_compute[245707]: 2026-01-22 14:13:40.401 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:13:40 np0005592157 nova_compute[245707]: 2026-01-22 14:13:40.402 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:13:40 np0005592157 nova_compute[245707]: 2026-01-22 14:13:40.402 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:13:40 np0005592157 nova_compute[245707]: 2026-01-22 14:13:40.403 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:13:40 np0005592157 nova_compute[245707]: 2026-01-22 14:13:40.403 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:13:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:40 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:13:40 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:13:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:13:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1473123618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:13:40 np0005592157 nova_compute[245707]: 2026-01-22 14:13:40.859 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:13:40 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:40 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.088 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.090 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4789MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.091 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.091 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.206 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.207 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.207 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.207 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.207 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.271 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:13:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:13:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960319933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.755 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:13:41 np0005592157 nova_compute[245707]: 2026-01-22 14:13:41.763 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:13:41 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:42 np0005592157 nova_compute[245707]: 2026-01-22 14:13:42.020 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:13:42 np0005592157 nova_compute[245707]: 2026-01-22 14:13:42.024 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:13:42 np0005592157 nova_compute[245707]: 2026-01-22 14:13:42.024 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:13:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:42.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:42.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:42 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.020 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.021 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.236 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.237 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.237 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.324 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.324 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.324 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.324 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:13:43 np0005592157 nova_compute[245707]: 2026-01-22 14:13:43.325 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:44 np0005592157 nova_compute[245707]: 2026-01-22 14:13:44.187 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:44 np0005592157 nova_compute[245707]: 2026-01-22 14:13:44.190 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:13:44 np0005592157 nova_compute[245707]: 2026-01-22 14:13:44.190 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:13:44 np0005592157 nova_compute[245707]: 2026-01-22 14:13:44.191 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:44 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:44 np0005592157 nova_compute[245707]: 2026-01-22 14:13:44.230 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:44 np0005592157 nova_compute[245707]: 2026-01-22 14:13:44.231 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:13:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:45 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:45 np0005592157 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 09:13:46 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:46 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:46.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:13:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:47 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:13:47
Jan 22 09:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.control', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Jan 22 09:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:13:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:13:47.583 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:13:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:13:47.584 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:13:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:13:47.584 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:13:48 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:48.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:49 np0005592157 nova_compute[245707]: 2026-01-22 14:13:49.232 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:49 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:50.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:50 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:50.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:51 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:51 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:52 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:52.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:53 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:54 np0005592157 nova_compute[245707]: 2026-01-22 14:13:54.233 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:54.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:54 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:54.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:55 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:55 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:55 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:56.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:56 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:56.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:57 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:58 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:13:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:13:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:13:59 np0005592157 nova_compute[245707]: 2026-01-22 14:13:59.235 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:13:59 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:00.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:00 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:00 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:01 np0005592157 podman[269664]: 2026-01-22 14:14:01.331731127 +0000 UTC m=+0.055017027 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:14:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:01 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:02.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:02.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:02 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:03 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004369903626930951 of space, bias 1.0, pg target 1.3109710880792853 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:14:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:14:04 np0005592157 nova_compute[245707]: 2026-01-22 14:14:04.237 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:14:04 np0005592157 nova_compute[245707]: 2026-01-22 14:14:04.239 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:04 np0005592157 nova_compute[245707]: 2026-01-22 14:14:04.239 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:14:04 np0005592157 nova_compute[245707]: 2026-01-22 14:14:04.239 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:04 np0005592157 nova_compute[245707]: 2026-01-22 14:14:04.240 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:04 np0005592157 nova_compute[245707]: 2026-01-22 14:14:04.242 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:04.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:05 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:06 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:06 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:06.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:07 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:08 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:08.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:08.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:09 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:09 np0005592157 nova_compute[245707]: 2026-01-22 14:14:09.243 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:14:09 np0005592157 nova_compute[245707]: 2026-01-22 14:14:09.245 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:14:09 np0005592157 nova_compute[245707]: 2026-01-22 14:14:09.245 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:14:09 np0005592157 nova_compute[245707]: 2026-01-22 14:14:09.245 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:09 np0005592157 nova_compute[245707]: 2026-01-22 14:14:09.260 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:09 np0005592157 nova_compute[245707]: 2026-01-22 14:14:09.262 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:09 np0005592157 podman[269689]: 2026-01-22 14:14:09.394665484 +0000 UTC m=+0.105491998 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:14:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:10 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:10.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:11 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:11 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:12.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:12.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:12 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:12 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:13 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:14 np0005592157 nova_compute[245707]: 2026-01-22 14:14:14.262 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:14.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:14 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:15 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:15 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:16.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:14:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:17 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:18 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:14:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3226480098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:14:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:14:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3226480098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:14:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:18.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:19 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:14:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:14:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:19 np0005592157 nova_compute[245707]: 2026-01-22 14:14:19.262 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:19 np0005592157 nova_compute[245707]: 2026-01-22 14:14:19.265 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8d830a1d-6f58-475b-9e59-e1d8a610b5c3 does not exist
Jan 22 09:14:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1191ffc3-b516-4034-b7e2-0d9dd4861808 does not exist
Jan 22 09:14:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev eb189fa5-0108-4997-adc1-1749ce8136e9 does not exist
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:14:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:20.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:20 np0005592157 podman[270043]: 2026-01-22 14:14:20.675030773 +0000 UTC m=+0.054994996 container create fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_boyd, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:14:20 np0005592157 systemd[1]: Started libpod-conmon-fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17.scope.
Jan 22 09:14:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:14:20 np0005592157 podman[270043]: 2026-01-22 14:14:20.653445333 +0000 UTC m=+0.033409626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:14:20 np0005592157 podman[270043]: 2026-01-22 14:14:20.758764476 +0000 UTC m=+0.138728729 container init fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_boyd, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:14:20 np0005592157 podman[270043]: 2026-01-22 14:14:20.766178861 +0000 UTC m=+0.146143094 container start fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_boyd, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:14:20 np0005592157 podman[270043]: 2026-01-22 14:14:20.770012997 +0000 UTC m=+0.149977260 container attach fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_boyd, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 09:14:20 np0005592157 vigilant_boyd[270059]: 167 167
Jan 22 09:14:20 np0005592157 systemd[1]: libpod-fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17.scope: Deactivated successfully.
Jan 22 09:14:20 np0005592157 podman[270043]: 2026-01-22 14:14:20.773053383 +0000 UTC m=+0.153017616 container died fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_boyd, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:14:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4044a390fd9f207a8d9309564162911763bb690e2d38343c66d86edec156c5a7-merged.mount: Deactivated successfully.
Jan 22 09:14:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:20.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:20 np0005592157 podman[270043]: 2026-01-22 14:14:20.818698645 +0000 UTC m=+0.198662878 container remove fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:14:20 np0005592157 systemd[1]: libpod-conmon-fb50e8837a4a26a4120d2f63565f17effe91eddce69b53b16250c09791a79d17.scope: Deactivated successfully.
Jan 22 09:14:21 np0005592157 podman[270083]: 2026-01-22 14:14:21.013202697 +0000 UTC m=+0.048080383 container create c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:14:21 np0005592157 systemd[1]: Started libpod-conmon-c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12.scope.
Jan 22 09:14:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:14:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154dc636a75d76b0df3b5bcc41a48201dcd7f4b3014c2b724ed1bedbd5e560e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154dc636a75d76b0df3b5bcc41a48201dcd7f4b3014c2b724ed1bedbd5e560e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154dc636a75d76b0df3b5bcc41a48201dcd7f4b3014c2b724ed1bedbd5e560e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154dc636a75d76b0df3b5bcc41a48201dcd7f4b3014c2b724ed1bedbd5e560e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154dc636a75d76b0df3b5bcc41a48201dcd7f4b3014c2b724ed1bedbd5e560e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:21 np0005592157 podman[270083]: 2026-01-22 14:14:20.99329433 +0000 UTC m=+0.028172036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:14:21 np0005592157 podman[270083]: 2026-01-22 14:14:21.104840178 +0000 UTC m=+0.139717884 container init c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:14:21 np0005592157 podman[270083]: 2026-01-22 14:14:21.115883734 +0000 UTC m=+0.150761430 container start c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 09:14:21 np0005592157 podman[270083]: 2026-01-22 14:14:21.125525665 +0000 UTC m=+0.160403351 container attach c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:14:21 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:21 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:21 np0005592157 wizardly_galois[270099]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:14:21 np0005592157 wizardly_galois[270099]: --> relative data size: 1.0
Jan 22 09:14:21 np0005592157 wizardly_galois[270099]: --> All data devices are unavailable
Jan 22 09:14:22 np0005592157 systemd[1]: libpod-c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12.scope: Deactivated successfully.
Jan 22 09:14:22 np0005592157 podman[270083]: 2026-01-22 14:14:22.008201033 +0000 UTC m=+1.043078729 container died c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:14:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-154dc636a75d76b0df3b5bcc41a48201dcd7f4b3014c2b724ed1bedbd5e560e3-merged.mount: Deactivated successfully.
Jan 22 09:14:22 np0005592157 podman[270083]: 2026-01-22 14:14:22.090356907 +0000 UTC m=+1.125234593 container remove c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:14:22 np0005592157 systemd[1]: libpod-conmon-c5362f374c1174ba51f0f689fcb10237efb7ed44e586468878828e8f28fd5d12.scope: Deactivated successfully.
Jan 22 09:14:22 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:22.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:22 np0005592157 podman[270266]: 2026-01-22 14:14:22.775149567 +0000 UTC m=+0.077381936 container create 041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:14:22 np0005592157 systemd[1]: Started libpod-conmon-041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5.scope.
Jan 22 09:14:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:22.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:22 np0005592157 podman[270266]: 2026-01-22 14:14:22.72007108 +0000 UTC m=+0.022303469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:14:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:14:22 np0005592157 podman[270266]: 2026-01-22 14:14:22.84689442 +0000 UTC m=+0.149126789 container init 041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:14:22 np0005592157 podman[270266]: 2026-01-22 14:14:22.85487044 +0000 UTC m=+0.157102809 container start 041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:14:22 np0005592157 podman[270266]: 2026-01-22 14:14:22.858401618 +0000 UTC m=+0.160633987 container attach 041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:14:22 np0005592157 hungry_gould[270282]: 167 167
Jan 22 09:14:22 np0005592157 systemd[1]: libpod-041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5.scope: Deactivated successfully.
Jan 22 09:14:22 np0005592157 podman[270266]: 2026-01-22 14:14:22.86085657 +0000 UTC m=+0.163088939 container died 041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:14:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4b8aafacc2b41066c90f4615ee6b69a6e12f3f1d8d5c8eef8e173cf8f96bf5b0-merged.mount: Deactivated successfully.
Jan 22 09:14:22 np0005592157 podman[270266]: 2026-01-22 14:14:22.899033514 +0000 UTC m=+0.201265873 container remove 041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:14:22 np0005592157 systemd[1]: libpod-conmon-041b62a8952a9ce212f72c4860f33f05d5c95e4bd30e7998630ad62e83cefdf5.scope: Deactivated successfully.
Jan 22 09:14:23 np0005592157 podman[270306]: 2026-01-22 14:14:23.053555117 +0000 UTC m=+0.044241537 container create e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:14:23 np0005592157 systemd[1]: Started libpod-conmon-e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e.scope.
Jan 22 09:14:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:14:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff88629a1d61fdcdf01d10031f8b23ef611aeb7bc827f366d4a33a8abda4df9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff88629a1d61fdcdf01d10031f8b23ef611aeb7bc827f366d4a33a8abda4df9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff88629a1d61fdcdf01d10031f8b23ef611aeb7bc827f366d4a33a8abda4df9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:23 np0005592157 podman[270306]: 2026-01-22 14:14:23.03487295 +0000 UTC m=+0.025559400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:14:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff88629a1d61fdcdf01d10031f8b23ef611aeb7bc827f366d4a33a8abda4df9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:23 np0005592157 podman[270306]: 2026-01-22 14:14:23.14168132 +0000 UTC m=+0.132367740 container init e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:14:23 np0005592157 podman[270306]: 2026-01-22 14:14:23.149473555 +0000 UTC m=+0.140159975 container start e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:14:23 np0005592157 podman[270306]: 2026-01-22 14:14:23.152988733 +0000 UTC m=+0.143675153 container attach e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:14:23 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]: {
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:    "0": [
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:        {
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "devices": [
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "/dev/loop3"
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            ],
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "lv_name": "ceph_lv0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "lv_size": "7511998464",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "name": "ceph_lv0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "tags": {
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.cluster_name": "ceph",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.crush_device_class": "",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.encrypted": "0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.osd_id": "0",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.type": "block",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:                "ceph.vdo": "0"
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            },
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "type": "block",
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:            "vg_name": "ceph_vg0"
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:        }
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]:    ]
Jan 22 09:14:23 np0005592157 frosty_solomon[270323]: }
Jan 22 09:14:23 np0005592157 systemd[1]: libpod-e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e.scope: Deactivated successfully.
Jan 22 09:14:23 np0005592157 podman[270306]: 2026-01-22 14:14:23.949585839 +0000 UTC m=+0.940272259 container died e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:14:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ff88629a1d61fdcdf01d10031f8b23ef611aeb7bc827f366d4a33a8abda4df9a-merged.mount: Deactivated successfully.
Jan 22 09:14:24 np0005592157 podman[270306]: 2026-01-22 14:14:24.021652361 +0000 UTC m=+1.012338791 container remove e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:14:24 np0005592157 systemd[1]: libpod-conmon-e74666cd37f2fc38e17929ce5f2ff0e88a9bb91b12dc133ce44bd6ffba088c2e.scope: Deactivated successfully.
Jan 22 09:14:24 np0005592157 nova_compute[245707]: 2026-01-22 14:14:24.264 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:24 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:24 np0005592157 podman[270488]: 2026-01-22 14:14:24.636318288 +0000 UTC m=+0.038613627 container create 25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:14:24 np0005592157 systemd[1]: Started libpod-conmon-25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081.scope.
Jan 22 09:14:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:14:24 np0005592157 podman[270488]: 2026-01-22 14:14:24.619069407 +0000 UTC m=+0.021364766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:14:24 np0005592157 podman[270488]: 2026-01-22 14:14:24.722356289 +0000 UTC m=+0.124651658 container init 25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:14:24 np0005592157 podman[270488]: 2026-01-22 14:14:24.728429941 +0000 UTC m=+0.130725280 container start 25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:14:24 np0005592157 podman[270488]: 2026-01-22 14:14:24.732443461 +0000 UTC m=+0.134738820 container attach 25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hamilton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 09:14:24 np0005592157 competent_hamilton[270504]: 167 167
Jan 22 09:14:24 np0005592157 systemd[1]: libpod-25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081.scope: Deactivated successfully.
Jan 22 09:14:24 np0005592157 podman[270488]: 2026-01-22 14:14:24.734155044 +0000 UTC m=+0.136450383 container died 25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hamilton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:14:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-847031bf50c81a06eab535bd53a8df2fe065977f116fad7da523f1dbbf7b4bf2-merged.mount: Deactivated successfully.
Jan 22 09:14:24 np0005592157 podman[270488]: 2026-01-22 14:14:24.772764169 +0000 UTC m=+0.175059508 container remove 25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hamilton, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:14:24 np0005592157 systemd[1]: libpod-conmon-25709485cdf752dd0abccc63c4055c375fb78208a8f5c7b9a1494ffad408f081.scope: Deactivated successfully.
Jan 22 09:14:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:24.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:24 np0005592157 podman[270529]: 2026-01-22 14:14:24.933473407 +0000 UTC m=+0.040373990 container create bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:14:24 np0005592157 systemd[1]: Started libpod-conmon-bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474.scope.
Jan 22 09:14:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:14:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6edbd5b43da243c94f8717c29be9ea7a7fed40ddfd60247521c26ba9995385/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6edbd5b43da243c94f8717c29be9ea7a7fed40ddfd60247521c26ba9995385/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6edbd5b43da243c94f8717c29be9ea7a7fed40ddfd60247521c26ba9995385/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:25 np0005592157 podman[270529]: 2026-01-22 14:14:24.917661172 +0000 UTC m=+0.024561775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:14:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6edbd5b43da243c94f8717c29be9ea7a7fed40ddfd60247521c26ba9995385/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:14:25 np0005592157 podman[270529]: 2026-01-22 14:14:25.028129324 +0000 UTC m=+0.135029997 container init bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:14:25 np0005592157 podman[270529]: 2026-01-22 14:14:25.040902413 +0000 UTC m=+0.147802986 container start bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 09:14:25 np0005592157 podman[270529]: 2026-01-22 14:14:25.044310068 +0000 UTC m=+0.151210701 container attach bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 09:14:25 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2252 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]: {
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]:        "osd_id": 0,
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]:        "type": "bluestore"
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]:    }
Jan 22 09:14:25 np0005592157 naughty_jackson[270545]: }
Jan 22 09:14:25 np0005592157 systemd[1]: libpod-bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474.scope: Deactivated successfully.
Jan 22 09:14:25 np0005592157 podman[270529]: 2026-01-22 14:14:25.959408226 +0000 UTC m=+1.066308839 container died bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:14:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8a6edbd5b43da243c94f8717c29be9ea7a7fed40ddfd60247521c26ba9995385-merged.mount: Deactivated successfully.
Jan 22 09:14:26 np0005592157 podman[270529]: 2026-01-22 14:14:26.035648062 +0000 UTC m=+1.142548655 container remove bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:14:26 np0005592157 systemd[1]: libpod-conmon-bb878ff355f243372506efe35d542d125b06d1cc025840819348d6fdf5c29474.scope: Deactivated successfully.
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 904cc938-b7d7-4038-88e8-ce0e5337358b does not exist
Jan 22 09:14:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1e7ed76c-5220-47a1-96e5-46d1794e9710 does not exist
Jan 22 09:14:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3bf2b98d-61fb-4cca-8fdd-ee15d297d1a1 does not exist
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2252 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:26.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:27 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:14:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:14:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:28.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:29 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:29 np0005592157 nova_compute[245707]: 2026-01-22 14:14:29.267 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:30 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:30.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:31 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:31 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:14:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2581 syncs, 4.02 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1168 writes, 3346 keys, 1168 commit groups, 1.0 writes per commit group, ingest: 2.95 MB, 0.00 MB/s#012Interval WAL: 1168 writes, 504 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:14:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:32 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:32 np0005592157 podman[270684]: 2026-01-22 14:14:32.341880372 +0000 UTC m=+0.065505849 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:14:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:32.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:32.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:33 np0005592157 nova_compute[245707]: 2026-01-22 14:14:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:33 np0005592157 nova_compute[245707]: 2026-01-22 14:14:33.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.287001) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273287184, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1193, "num_deletes": 251, "total_data_size": 1541023, "memory_usage": 1573584, "flush_reason": "Manual Compaction"}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273313905, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1505369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36725, "largest_seqno": 37917, "table_properties": {"data_size": 1500082, "index_size": 2555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13785, "raw_average_key_size": 20, "raw_value_size": 1488459, "raw_average_value_size": 2251, "num_data_blocks": 110, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091196, "oldest_key_time": 1769091196, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 27035 microseconds, and 6295 cpu microseconds.
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.314053) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1505369 bytes OK
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.314078) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.316381) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.316400) EVENT_LOG_v1 {"time_micros": 1769091273316394, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.316422) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1535517, prev total WAL file size 1535517, number of live WAL files 2.
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.317330) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1470KB)], [77(10MB)]
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273317381, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12306727, "oldest_snapshot_seqno": -1}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7970 keys, 10595749 bytes, temperature: kUnknown
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273410260, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 10595749, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10546916, "index_size": 27793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 212688, "raw_average_key_size": 26, "raw_value_size": 10405766, "raw_average_value_size": 1305, "num_data_blocks": 1075, "num_entries": 7970, "num_filter_entries": 7970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.410613) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10595749 bytes
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.412499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 114.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 8487, records dropped: 517 output_compression: NoCompression
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.412524) EVENT_LOG_v1 {"time_micros": 1769091273412513, "job": 44, "event": "compaction_finished", "compaction_time_micros": 92981, "compaction_time_cpu_micros": 56066, "output_level": 6, "num_output_files": 1, "total_output_size": 10595749, "num_input_records": 8487, "num_output_records": 7970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273413233, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273416818, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.317058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.416863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.416870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.416872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.416875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:14:33.416878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:34 np0005592157 nova_compute[245707]: 2026-01-22 14:14:34.269 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:34 np0005592157 nova_compute[245707]: 2026-01-22 14:14:34.271 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:34 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:34.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:34.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:35 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2262 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:36 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:36 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2262 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:36.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:36.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:37 np0005592157 nova_compute[245707]: 2026-01-22 14:14:37.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:37 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:38 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:39 np0005592157 nova_compute[245707]: 2026-01-22 14:14:39.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:39 np0005592157 nova_compute[245707]: 2026-01-22 14:14:39.271 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:14:39 np0005592157 nova_compute[245707]: 2026-01-22 14:14:39.273 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:14:39 np0005592157 nova_compute[245707]: 2026-01-22 14:14:39.273 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:14:39 np0005592157 nova_compute[245707]: 2026-01-22 14:14:39.274 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:39 np0005592157 nova_compute[245707]: 2026-01-22 14:14:39.287 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:39 np0005592157 nova_compute[245707]: 2026-01-22 14:14:39.288 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:39 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:40 np0005592157 podman[270708]: 2026-01-22 14:14:40.378150393 +0000 UTC m=+0.113345634 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:14:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:40.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:40 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:41 np0005592157 nova_compute[245707]: 2026-01-22 14:14:41.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:41 np0005592157 nova_compute[245707]: 2026-01-22 14:14:41.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:41 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:41 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.273 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.273 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.274 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.274 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.274 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.298 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.298 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.298 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.299 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.299 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:14:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:42.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:42 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:14:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997728244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.768 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:14:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:42.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.943 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.944 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4794MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.944 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:14:42 np0005592157 nova_compute[245707]: 2026-01-22 14:14:42.945 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.047 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.048 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.048 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.048 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.048 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.172 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:14:43 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:14:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571695261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.661 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.667 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.689 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.690 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:14:43 np0005592157 nova_compute[245707]: 2026-01-22 14:14:43.691 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:14:44 np0005592157 nova_compute[245707]: 2026-01-22 14:14:44.288 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:44.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:44 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:44 np0005592157 nova_compute[245707]: 2026-01-22 14:14:44.687 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 09:14:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:45 np0005592157 nova_compute[245707]: 2026-01-22 14:14:45.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:45 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:14:46 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:46 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:14:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:14:47
Jan 22 09:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root']
Jan 22 09:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:14:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:14:47.585 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:14:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:14:47.586 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:14:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:14:47.586 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:14:47 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:48.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:48 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:49 np0005592157 nova_compute[245707]: 2026-01-22 14:14:49.290 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:49 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:50.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2277 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:50 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:50 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2277 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:51 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:52 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:53 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:54 np0005592157 nova_compute[245707]: 2026-01-22 14:14:54.291 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:55 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:56 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:56 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:56.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:57 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:58.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:58 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:14:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:14:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:14:59 np0005592157 nova_compute[245707]: 2026-01-22 14:14:59.294 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:14:59 np0005592157 nova_compute[245707]: 2026-01-22 14:14:59.296 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:14:59 np0005592157 nova_compute[245707]: 2026-01-22 14:14:59.296 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:14:59 np0005592157 nova_compute[245707]: 2026-01-22 14:14:59.296 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:59 np0005592157 nova_compute[245707]: 2026-01-22 14:14:59.344 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:59 np0005592157 nova_compute[245707]: 2026-01-22 14:14:59.345 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:14:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:14:59 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:00.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:00 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:00 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:01 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:02.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:02 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:03 np0005592157 podman[270843]: 2026-01-22 14:15:03.330829983 +0000 UTC m=+0.058265547 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 09:15:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:03 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004369903626930951 of space, bias 1.0, pg target 1.3109710880792853 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:15:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:15:04 np0005592157 nova_compute[245707]: 2026-01-22 14:15:04.345 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:04.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:04.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:04 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:06 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:06 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:06.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:07 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:08 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:08.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:08.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:09 np0005592157 nova_compute[245707]: 2026-01-22 14:15:09.346 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:09 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:10 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:10.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:10.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:11 np0005592157 podman[270866]: 2026-01-22 14:15:11.340369706 +0000 UTC m=+0.081250332 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:15:11 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:11 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:12 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:12.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:13 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:14 np0005592157 nova_compute[245707]: 2026-01-22 14:15:14.348 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:15:14 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:14.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:14.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:15 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:15:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:16.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:15:16 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:16 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:15:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:16.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:17 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:18.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:18 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:18.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:19 np0005592157 nova_compute[245707]: 2026-01-22 14:15:19.350 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:15:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:19 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:19 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:20.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:20 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:20 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:22 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:22.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:22.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:23 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:24 np0005592157 nova_compute[245707]: 2026-01-22 14:15:24.352 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:24.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:24 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:25 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:26.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:26 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:26 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:15:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:15:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:15:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:15:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:15:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e7577a3c-c2df-4f18-9edf-671a704bb407 does not exist
Jan 22 09:15:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 19a9e1a5-0548-412c-821c-a2453d59d508 does not exist
Jan 22 09:15:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 05916ee8-2afa-4d9d-8ad9-917ba22f9120 does not exist
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:15:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:28.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:28 np0005592157 podman[271344]: 2026-01-22 14:15:28.63236571 +0000 UTC m=+0.075127149 container create 8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_galois, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:15:28 np0005592157 podman[271344]: 2026-01-22 14:15:28.584622857 +0000 UTC m=+0.027384346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:15:28 np0005592157 systemd[1]: Started libpod-conmon-8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707.scope.
Jan 22 09:15:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:15:28 np0005592157 podman[271344]: 2026-01-22 14:15:28.818157435 +0000 UTC m=+0.260918904 container init 8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:15:28 np0005592157 podman[271344]: 2026-01-22 14:15:28.826778401 +0000 UTC m=+0.269539840 container start 8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_galois, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:15:28 np0005592157 podman[271344]: 2026-01-22 14:15:28.831010047 +0000 UTC m=+0.273771506 container attach 8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:15:28 np0005592157 busy_galois[271360]: 167 167
Jan 22 09:15:28 np0005592157 systemd[1]: libpod-8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707.scope: Deactivated successfully.
Jan 22 09:15:28 np0005592157 conmon[271360]: conmon 8e007b3a975e60d2a189 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707.scope/container/memory.events
Jan 22 09:15:28 np0005592157 podman[271344]: 2026-01-22 14:15:28.834489294 +0000 UTC m=+0.277250733 container died 8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:15:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-95f71f4aae88ebf78a388ace98abaf9e625ff8a83bf5546c3b7a7cbc49a23b57-merged.mount: Deactivated successfully.
Jan 22 09:15:28 np0005592157 podman[271344]: 2026-01-22 14:15:28.874065303 +0000 UTC m=+0.316826742 container remove 8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:15:28 np0005592157 systemd[1]: libpod-conmon-8e007b3a975e60d2a189acf1b77e88bf59c61442e137009679605bfdd3996707.scope: Deactivated successfully.
Jan 22 09:15:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:29 np0005592157 podman[271383]: 2026-01-22 14:15:29.037739895 +0000 UTC m=+0.042346280 container create bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:15:29 np0005592157 systemd[1]: Started libpod-conmon-bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17.scope.
Jan 22 09:15:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:15:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1be6b379a53569d732ccfad2d7c275d28c9d4e45959dc92ec50f41a641a3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1be6b379a53569d732ccfad2d7c275d28c9d4e45959dc92ec50f41a641a3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1be6b379a53569d732ccfad2d7c275d28c9d4e45959dc92ec50f41a641a3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1be6b379a53569d732ccfad2d7c275d28c9d4e45959dc92ec50f41a641a3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1be6b379a53569d732ccfad2d7c275d28c9d4e45959dc92ec50f41a641a3aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:29 np0005592157 podman[271383]: 2026-01-22 14:15:29.020116284 +0000 UTC m=+0.024722689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:15:29 np0005592157 podman[271383]: 2026-01-22 14:15:29.117862358 +0000 UTC m=+0.122468763 container init bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 22 09:15:29 np0005592157 podman[271383]: 2026-01-22 14:15:29.124770781 +0000 UTC m=+0.129377176 container start bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:15:29 np0005592157 podman[271383]: 2026-01-22 14:15:29.34751314 +0000 UTC m=+0.352119585 container attach bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_villani, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:15:29 np0005592157 nova_compute[245707]: 2026-01-22 14:15:29.354 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:15:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:29 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:29 np0005592157 elated_villani[271399]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:15:29 np0005592157 elated_villani[271399]: --> relative data size: 1.0
Jan 22 09:15:29 np0005592157 elated_villani[271399]: --> All data devices are unavailable
Jan 22 09:15:29 np0005592157 systemd[1]: libpod-bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17.scope: Deactivated successfully.
Jan 22 09:15:30 np0005592157 podman[271415]: 2026-01-22 14:15:30.039078489 +0000 UTC m=+0.028055512 container died bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:15:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:30.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:31 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cc1be6b379a53569d732ccfad2d7c275d28c9d4e45959dc92ec50f41a641a3aa-merged.mount: Deactivated successfully.
Jan 22 09:15:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:32.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:32 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:32 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592157 podman[271415]: 2026-01-22 14:15:32.972824714 +0000 UTC m=+2.961801737 container remove bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:15:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:32 np0005592157 systemd[1]: libpod-conmon-bc647cdab985146a9faa36dee8d46c82d8a17321568cdcc31f8348b89c181e17.scope: Deactivated successfully.
Jan 22 09:15:33 np0005592157 nova_compute[245707]: 2026-01-22 14:15:33.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:33 np0005592157 nova_compute[245707]: 2026-01-22 14:15:33.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:15:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:33 np0005592157 podman[271625]: 2026-01-22 14:15:33.56187459 +0000 UTC m=+0.023320213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:15:33 np0005592157 podman[271625]: 2026-01-22 14:15:33.900061606 +0000 UTC m=+0.361507239 container create b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:15:33 np0005592157 systemd[1]: Started libpod-conmon-b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c.scope.
Jan 22 09:15:34 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:34 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:15:34 np0005592157 nova_compute[245707]: 2026-01-22 14:15:34.357 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:15:34 np0005592157 nova_compute[245707]: 2026-01-22 14:15:34.360 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:34.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:34 np0005592157 podman[271625]: 2026-01-22 14:15:34.504216759 +0000 UTC m=+0.965662442 container init b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:15:34 np0005592157 podman[271625]: 2026-01-22 14:15:34.515328937 +0000 UTC m=+0.976774540 container start b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:15:34 np0005592157 magical_jennings[271654]: 167 167
Jan 22 09:15:34 np0005592157 systemd[1]: libpod-b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c.scope: Deactivated successfully.
Jan 22 09:15:34 np0005592157 podman[271625]: 2026-01-22 14:15:34.601780428 +0000 UTC m=+1.063226051 container attach b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:15:34 np0005592157 podman[271625]: 2026-01-22 14:15:34.603953592 +0000 UTC m=+1.065399195 container died b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:15:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:35 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:35 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e32d6364ea4691df35b5f56509dcbfcf0575322692cbbc0b5831ef9878b99036-merged.mount: Deactivated successfully.
Jan 22 09:15:35 np0005592157 podman[271625]: 2026-01-22 14:15:35.132128906 +0000 UTC m=+1.593574509 container remove b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:15:35 np0005592157 podman[271639]: 2026-01-22 14:15:35.182588099 +0000 UTC m=+1.237542399 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:15:35 np0005592157 systemd[1]: libpod-conmon-b3a428e34eba72173d12e7ff1b970cec661a7532fb96fd66a2471812990bf30c.scope: Deactivated successfully.
Jan 22 09:15:35 np0005592157 podman[271685]: 2026-01-22 14:15:35.290255431 +0000 UTC m=+0.026219757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:15:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:35 np0005592157 podman[271685]: 2026-01-22 14:15:35.709739048 +0000 UTC m=+0.445703344 container create cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:15:35 np0005592157 systemd[1]: Started libpod-conmon-cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696.scope.
Jan 22 09:15:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:15:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92abb043e80c67348c2c0fa2b1f05ba4e522dfdb19881584f64301997554de9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92abb043e80c67348c2c0fa2b1f05ba4e522dfdb19881584f64301997554de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92abb043e80c67348c2c0fa2b1f05ba4e522dfdb19881584f64301997554de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92abb043e80c67348c2c0fa2b1f05ba4e522dfdb19881584f64301997554de9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:35 np0005592157 podman[271685]: 2026-01-22 14:15:35.995999605 +0000 UTC m=+0.731963981 container init cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:15:36 np0005592157 podman[271685]: 2026-01-22 14:15:36.004330783 +0000 UTC m=+0.740295079 container start cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:15:36 np0005592157 podman[271685]: 2026-01-22 14:15:36.040827326 +0000 UTC m=+0.776791652 container attach cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:15:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:36 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:15:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:36.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:15:36 np0005592157 infallible_villani[271703]: {
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:    "0": [
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:        {
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "devices": [
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "/dev/loop3"
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            ],
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "lv_name": "ceph_lv0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "lv_size": "7511998464",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "name": "ceph_lv0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "tags": {
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.cluster_name": "ceph",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.crush_device_class": "",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.encrypted": "0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.osd_id": "0",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.type": "block",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:                "ceph.vdo": "0"
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            },
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "type": "block",
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:            "vg_name": "ceph_vg0"
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:        }
Jan 22 09:15:36 np0005592157 infallible_villani[271703]:    ]
Jan 22 09:15:36 np0005592157 infallible_villani[271703]: }
Jan 22 09:15:36 np0005592157 systemd[1]: libpod-cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696.scope: Deactivated successfully.
Jan 22 09:15:36 np0005592157 podman[271685]: 2026-01-22 14:15:36.830607451 +0000 UTC m=+1.566571767 container died cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:15:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:37 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:37 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b92abb043e80c67348c2c0fa2b1f05ba4e522dfdb19881584f64301997554de9-merged.mount: Deactivated successfully.
Jan 22 09:15:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:37 np0005592157 podman[271685]: 2026-01-22 14:15:37.931504574 +0000 UTC m=+2.667468870 container remove cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:15:37 np0005592157 systemd[1]: libpod-conmon-cee5375080c1bfd5d0ee009deb653243ccf44fc39b0d5f4266257c191d069696.scope: Deactivated successfully.
Jan 22 09:15:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:38 np0005592157 podman[271860]: 2026-01-22 14:15:38.578600282 +0000 UTC m=+0.079127350 container create 07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:15:38 np0005592157 podman[271860]: 2026-01-22 14:15:38.5205287 +0000 UTC m=+0.021055798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:15:38 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:38 np0005592157 systemd[1]: Started libpod-conmon-07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545.scope.
Jan 22 09:15:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:15:38 np0005592157 podman[271860]: 2026-01-22 14:15:38.755615697 +0000 UTC m=+0.256142765 container init 07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:15:38 np0005592157 podman[271860]: 2026-01-22 14:15:38.763750341 +0000 UTC m=+0.264277409 container start 07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:15:38 np0005592157 xenodochial_nobel[271876]: 167 167
Jan 22 09:15:38 np0005592157 systemd[1]: libpod-07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545.scope: Deactivated successfully.
Jan 22 09:15:38 np0005592157 conmon[271876]: conmon 07b204eadfcfaef2c672 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545.scope/container/memory.events
Jan 22 09:15:38 np0005592157 podman[271860]: 2026-01-22 14:15:38.840665134 +0000 UTC m=+0.341192202 container attach 07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:15:38 np0005592157 podman[271860]: 2026-01-22 14:15:38.841356611 +0000 UTC m=+0.341883679 container died 07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:15:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:38.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-52a698e33a0223a269b2a65e3c08553793ad436102464876562f73f613338576-merged.mount: Deactivated successfully.
Jan 22 09:15:39 np0005592157 nova_compute[245707]: 2026-01-22 14:15:39.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:39 np0005592157 podman[271860]: 2026-01-22 14:15:39.306792537 +0000 UTC m=+0.807319595 container remove 07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:15:39 np0005592157 systemd[1]: libpod-conmon-07b204eadfcfaef2c672dd0854c739a582934646138448c21e01add44ca2c545.scope: Deactivated successfully.
Jan 22 09:15:39 np0005592157 nova_compute[245707]: 2026-01-22 14:15:39.362 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:39 np0005592157 nova_compute[245707]: 2026-01-22 14:15:39.364 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:15:39 np0005592157 podman[271902]: 2026-01-22 14:15:39.458405748 +0000 UTC m=+0.026708139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:15:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:39 np0005592157 podman[271902]: 2026-01-22 14:15:39.610911 +0000 UTC m=+0.179213361 container create 53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:15:39 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:39 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:39 np0005592157 systemd[1]: Started libpod-conmon-53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3.scope.
Jan 22 09:15:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:15:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86c0f33e3b616d13ee2017d506dbfe13b4cd80924fc094b478c4732e6af2b63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86c0f33e3b616d13ee2017d506dbfe13b4cd80924fc094b478c4732e6af2b63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86c0f33e3b616d13ee2017d506dbfe13b4cd80924fc094b478c4732e6af2b63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86c0f33e3b616d13ee2017d506dbfe13b4cd80924fc094b478c4732e6af2b63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:15:39 np0005592157 podman[271902]: 2026-01-22 14:15:39.881206498 +0000 UTC m=+0.449508929 container init 53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_beaver, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:15:39 np0005592157 podman[271902]: 2026-01-22 14:15:39.891258569 +0000 UTC m=+0.459560930 container start 53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_beaver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:15:39 np0005592157 podman[271902]: 2026-01-22 14:15:39.899901486 +0000 UTC m=+0.468203847 container attach 53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:15:40 np0005592157 nova_compute[245707]: 2026-01-22 14:15:40.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:40 np0005592157 nova_compute[245707]: 2026-01-22 14:15:40.246 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:15:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]: {
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]:        "osd_id": 0,
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]:        "type": "bluestore"
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]:    }
Jan 22 09:15:40 np0005592157 vigilant_beaver[271919]: }
Jan 22 09:15:40 np0005592157 systemd[1]: libpod-53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3.scope: Deactivated successfully.
Jan 22 09:15:40 np0005592157 podman[271902]: 2026-01-22 14:15:40.798495941 +0000 UTC m=+1.366798302 container died 53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:15:40 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e86c0f33e3b616d13ee2017d506dbfe13b4cd80924fc094b478c4732e6af2b63-merged.mount: Deactivated successfully.
Jan 22 09:15:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:40.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:41 np0005592157 podman[271902]: 2026-01-22 14:15:41.404629115 +0000 UTC m=+1.972931476 container remove 53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:15:41 np0005592157 systemd[1]: libpod-conmon-53aa4c4e1cf8bba99143c87c7cb97de91f14eac43c65fb2c268c472eb70881b3.scope: Deactivated successfully.
Jan 22 09:15:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:15:41 np0005592157 podman[271954]: 2026-01-22 14:15:41.532940202 +0000 UTC m=+0.089589080 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:15:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:15:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d62d493a-b37f-4ba1-bcbc-0351b2daa400 does not exist
Jan 22 09:15:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3746bde8-72be-49ef-a2f8-717fb0449d22 does not exist
Jan 22 09:15:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 27694ecc-1b5e-426d-8155-08bbca3d6588 does not exist
Jan 22 09:15:41 np0005592157 nova_compute[245707]: 2026-01-22 14:15:41.826 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:41 np0005592157 nova_compute[245707]: 2026-01-22 14:15:41.826 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:41 np0005592157 nova_compute[245707]: 2026-01-22 14:15:41.827 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:42 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:42 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:42.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.239 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.240 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.296 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.297 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.297 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.312 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.313 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.313 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:15:43 np0005592157 nova_compute[245707]: 2026-01-22 14:15:43.313 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:15:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:43 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.347 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.347 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.348 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.348 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.348 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.367 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.369 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:15:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:44.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:15:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191107956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:15:44 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.788 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.967 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.968 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4780MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.969 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:44 np0005592157 nova_compute[245707]: 2026-01-22 14:15:44.969 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:15:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:44.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.179 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.180 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.180 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.180 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.180 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.231 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing inventories for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.289 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating ProviderTree inventory for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.289 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating inventory in ProviderTree for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.308 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing aggregate associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.330 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing trait associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.389 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:15:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:45 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:15:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3975816590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.839 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.845 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.874 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.875 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:15:45 np0005592157 nova_compute[245707]: 2026-01-22 14:15:45.875 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:15:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:46 np0005592157 nova_compute[245707]: 2026-01-22 14:15:46.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:46.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:15:46 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:46 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:46.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:15:47
Jan 22 09:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr', 'images']
Jan 22 09:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:15:47 np0005592157 nova_compute[245707]: 2026-01-22 14:15:47.541 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:15:47.586 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:15:47.587 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:15:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:15:47.587 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:15:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:47 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:48.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:48 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:49.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:49 np0005592157 nova_compute[245707]: 2026-01-22 14:15:49.366 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:49 np0005592157 nova_compute[245707]: 2026-01-22 14:15:49.368 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:50 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:50.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:51 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:52.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:52 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:52 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:52 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:53.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:54 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:54 np0005592157 nova_compute[245707]: 2026-01-22 14:15:54.368 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:54 np0005592157 nova_compute[245707]: 2026-01-22 14:15:54.370 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:54.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:55.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:55 np0005592157 nova_compute[245707]: 2026-01-22 14:15:55.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:55 np0005592157 nova_compute[245707]: 2026-01-22 14:15:55.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:15:55 np0005592157 nova_compute[245707]: 2026-01-22 14:15:55.269 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:15:55 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:56 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:56 np0005592157 nova_compute[245707]: 2026-01-22 14:15:56.976 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:57 np0005592157 nova_compute[245707]: 2026-01-22 14:15:57.004 245711 WARNING nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] While synchronizing instance power states, found 3 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 22 09:15:57 np0005592157 nova_compute[245707]: 2026-01-22 14:15:57.005 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Sync already in progress for 18becd7f-5901-49d8-87eb-548e630001aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:15:57 np0005592157 nova_compute[245707]: 2026-01-22 14:15:57.005 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Sync already in progress for 1089392f-9bda-4904-9370-95fc2c3dd7c2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:15:57 np0005592157 nova_compute[245707]: 2026-01-22 14:15:57.005 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Triggering sync for uuid b8bec212-84ad-47fd-9608-2cc1999da6c4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:15:57 np0005592157 nova_compute[245707]: 2026-01-22 14:15:57.006 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "b8bec212-84ad-47fd-9608-2cc1999da6c4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:15:57 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:57 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:15:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:15:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:15:59 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:59 np0005592157 nova_compute[245707]: 2026-01-22 14:15:59.371 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:15:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:00 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:16:00.411 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:16:00 np0005592157 nova_compute[245707]: 2026-01-22 14:16:00.412 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:00 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:16:00.413 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:16:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:00.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:00 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 2347 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:01 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:01 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 2347 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:02.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:03 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:03 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004369903626930951 of space, bias 1.0, pg target 1.3109710880792853 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:16:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:16:04 np0005592157 nova_compute[245707]: 2026-01-22 14:16:04.372 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:04.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:04 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:05.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:05 np0005592157 podman[272135]: 2026-01-22 14:16:05.374225053 +0000 UTC m=+0.091975081 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:16:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:05 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:06.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 2352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:07.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:07 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 2352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:08 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:16:08.416 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:16:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:08.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:08 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:09.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:09 np0005592157 nova_compute[245707]: 2026-01-22 14:16:09.374 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:10 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:10.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:11.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 2357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.790868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371790994, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1351, "num_deletes": 251, "total_data_size": 1855809, "memory_usage": 1883752, "flush_reason": "Manual Compaction"}
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371849916, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1150158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37918, "largest_seqno": 39268, "table_properties": {"data_size": 1145332, "index_size": 2030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14943, "raw_average_key_size": 21, "raw_value_size": 1133865, "raw_average_value_size": 1652, "num_data_blocks": 88, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091274, "oldest_key_time": 1769091274, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 59156 microseconds, and 4373 cpu microseconds.
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.850033) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1150158 bytes OK
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.850058) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.854679) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.854701) EVENT_LOG_v1 {"time_micros": 1769091371854694, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.854722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1849689, prev total WAL file size 1849689, number of live WAL files 2.
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.855617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1123KB)], [80(10MB)]
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371855730, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 11745907, "oldest_snapshot_seqno": -1}
Jan 22 09:16:11 np0005592157 podman[272161]: 2026-01-22 14:16:11.907870777 +0000 UTC m=+0.086521124 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 8184 keys, 8509380 bytes, temperature: kUnknown
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371957515, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8509380, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8462965, "index_size": 24863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 218023, "raw_average_key_size": 26, "raw_value_size": 8321852, "raw_average_value_size": 1016, "num_data_blocks": 953, "num_entries": 8184, "num_filter_entries": 8184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.958199) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8509380 bytes
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.960363) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.9 rd, 83.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.6) write-amplify(7.4) OK, records in: 8656, records dropped: 472 output_compression: NoCompression
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.960411) EVENT_LOG_v1 {"time_micros": 1769091371960393, "job": 46, "event": "compaction_finished", "compaction_time_micros": 102263, "compaction_time_cpu_micros": 46654, "output_level": 6, "num_output_files": 1, "total_output_size": 8509380, "num_input_records": 8656, "num_output_records": 8184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371960907, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371963488, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.855512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.963564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.963569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.963571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.963572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:16:11.963573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:12.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:13.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:13 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 2357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:14 np0005592157 nova_compute[245707]: 2026-01-22 14:16:14.376 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:14 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:14 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:15.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:16 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:16 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:16.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:16:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 2362 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:17.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:17 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:17 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 2362 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:18.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:18 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:18 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:19.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:19 np0005592157 nova_compute[245707]: 2026-01-22 14:16:19.378 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:16:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:20.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:21.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 2367 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:21 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 2367 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:22.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:22 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:23.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:24 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:24 np0005592157 nova_compute[245707]: 2026-01-22 14:16:24.380 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:24.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:16:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:25.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:16:25 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:26.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 2372 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:27 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:28.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:28 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:28 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 2372 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:28 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:29 np0005592157 nova_compute[245707]: 2026-01-22 14:16:29.382 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:29 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:29 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:31 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:31.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 2377 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:32 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:32 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 2377 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:33 np0005592157 nova_compute[245707]: 2026-01-22 14:16:33.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:33 np0005592157 nova_compute[245707]: 2026-01-22 14:16:33.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:16:33 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:34 np0005592157 nova_compute[245707]: 2026-01-22 14:16:34.384 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:34.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:34 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:35.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:36 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:36 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:36 np0005592157 podman[272300]: 2026-01-22 14:16:36.376085159 +0000 UTC m=+0.099042277 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 09:16:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:36.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2387 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:37 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:37 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2387 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:38.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:38 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:38 np0005592157 ovn_controller[146940]: 2026-01-22T14:16:38Z|00036|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 09:16:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:39 np0005592157 nova_compute[245707]: 2026-01-22 14:16:39.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:39 np0005592157 nova_compute[245707]: 2026-01-22 14:16:39.386 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:16:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:39 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:40.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:41.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:41 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:41 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:41 np0005592157 nova_compute[245707]: 2026-01-22 14:16:41.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:41 np0005592157 nova_compute[245707]: 2026-01-22 14:16:41.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2392 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:42 np0005592157 podman[272346]: 2026-01-22 14:16:42.250033592 +0000 UTC m=+0.111801825 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 22 09:16:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:42.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:42 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:43.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.262 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.263 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.263 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.263 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:16:43 np0005592157 nova_compute[245707]: 2026-01-22 14:16:43.263 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2392 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 19b57ff1-7ef0-47d0-ab76-5021f8a4e840 does not exist
Jan 22 09:16:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dd70a236-cb92-4458-a1c3-24309db5c5c5 does not exist
Jan 22 09:16:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d9e3733e-0aa4-4823-80d8-de533d34d6fc does not exist
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:16:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:16:44 np0005592157 nova_compute[245707]: 2026-01-22 14:16:44.258 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:44 np0005592157 nova_compute[245707]: 2026-01-22 14:16:44.387 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:16:44 np0005592157 nova_compute[245707]: 2026-01-22 14:16:44.388 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:44 np0005592157 nova_compute[245707]: 2026-01-22 14:16:44.388 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:16:44 np0005592157 nova_compute[245707]: 2026-01-22 14:16:44.388 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:16:44 np0005592157 nova_compute[245707]: 2026-01-22 14:16:44.389 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:16:44 np0005592157 nova_compute[245707]: 2026-01-22 14:16:44.390 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:44 np0005592157 podman[272620]: 2026-01-22 14:16:44.506367643 +0000 UTC m=+0.054193166 container create 684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wiles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 22 09:16:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:44.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:44 np0005592157 systemd[1]: Started libpod-conmon-684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf.scope.
Jan 22 09:16:44 np0005592157 podman[272620]: 2026-01-22 14:16:44.474000404 +0000 UTC m=+0.021825947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:16:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:16:44 np0005592157 podman[272620]: 2026-01-22 14:16:44.832900936 +0000 UTC m=+0.380726569 container init 684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wiles, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:16:44 np0005592157 podman[272620]: 2026-01-22 14:16:44.844017994 +0000 UTC m=+0.391843517 container start 684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:16:44 np0005592157 recursing_wiles[272636]: 167 167
Jan 22 09:16:44 np0005592157 systemd[1]: libpod-684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf.scope: Deactivated successfully.
Jan 22 09:16:44 np0005592157 conmon[272636]: conmon 684b30a7ead54d47e19c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf.scope/container/memory.events
Jan 22 09:16:44 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:16:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:16:44 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:44 np0005592157 podman[272620]: 2026-01-22 14:16:44.962603839 +0000 UTC m=+0.510429362 container attach 684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:16:44 np0005592157 podman[272620]: 2026-01-22 14:16:44.964485526 +0000 UTC m=+0.512311049 container died 684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wiles, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:16:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:45.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1c2b4ada38723bdfd1066d01260c0949181f36d876e39efff1da6c7c0c2bb70a-merged.mount: Deactivated successfully.
Jan 22 09:16:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:45 np0005592157 podman[272620]: 2026-01-22 14:16:45.736135658 +0000 UTC m=+1.283961181 container remove 684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wiles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:16:45 np0005592157 systemd[1]: libpod-conmon-684b30a7ead54d47e19cf55e1211366d022e326cb004d4445567b4ae949b2fbf.scope: Deactivated successfully.
Jan 22 09:16:45 np0005592157 podman[272662]: 2026-01-22 14:16:45.928175419 +0000 UTC m=+0.064441052 container create b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:16:45 np0005592157 systemd[1]: Started libpod-conmon-b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739.scope.
Jan 22 09:16:45 np0005592157 podman[272662]: 2026-01-22 14:16:45.890868236 +0000 UTC m=+0.027133959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:16:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:16:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8e9844c47fa42e2eab87c92327841718ee9257d96bf100857e386ff5a29454/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8e9844c47fa42e2eab87c92327841718ee9257d96bf100857e386ff5a29454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8e9844c47fa42e2eab87c92327841718ee9257d96bf100857e386ff5a29454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8e9844c47fa42e2eab87c92327841718ee9257d96bf100857e386ff5a29454/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8e9844c47fa42e2eab87c92327841718ee9257d96bf100857e386ff5a29454/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:46 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:46 np0005592157 podman[272662]: 2026-01-22 14:16:46.032549668 +0000 UTC m=+0.168815321 container init b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:16:46 np0005592157 podman[272662]: 2026-01-22 14:16:46.040721312 +0000 UTC m=+0.176986945 container start b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:16:46 np0005592157 podman[272662]: 2026-01-22 14:16:46.046142858 +0000 UTC m=+0.182408491 container attach b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.266 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.266 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.266 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.267 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.267 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:16:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:46.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:16:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/353905065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.727 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.886 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.887 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4738MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.888 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.888 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:16:46 np0005592157 vibrant_heisenberg[272678]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:16:46 np0005592157 vibrant_heisenberg[272678]: --> relative data size: 1.0
Jan 22 09:16:46 np0005592157 vibrant_heisenberg[272678]: --> All data devices are unavailable
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.976 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.976 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.976 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.976 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:16:46 np0005592157 nova_compute[245707]: 2026-01-22 14:16:46.977 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:16:46 np0005592157 systemd[1]: libpod-b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739.scope: Deactivated successfully.
Jan 22 09:16:47 np0005592157 podman[272715]: 2026-01-22 14:16:47.028309393 +0000 UTC m=+0.026193396 container died b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:16:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9f8e9844c47fa42e2eab87c92327841718ee9257d96bf100857e386ff5a29454-merged.mount: Deactivated successfully.
Jan 22 09:16:47 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:47 np0005592157 nova_compute[245707]: 2026-01-22 14:16:47.079 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:16:47 np0005592157 podman[272715]: 2026-01-22 14:16:47.089471182 +0000 UTC m=+0.087355165 container remove b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:16:47 np0005592157 systemd[1]: libpod-conmon-b1cd66562b497e0f8300773e9c8d6d3f1459bf3387340b94f5a06ce45434e739.scope: Deactivated successfully.
Jan 22 09:16:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:47.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:16:47
Jan 22 09:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta']
Jan 22 09:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:16:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:16:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336408558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:16:47 np0005592157 nova_compute[245707]: 2026-01-22 14:16:47.530 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:16:47 np0005592157 nova_compute[245707]: 2026-01-22 14:16:47.537 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:16:47 np0005592157 nova_compute[245707]: 2026-01-22 14:16:47.554 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:16:47 np0005592157 nova_compute[245707]: 2026-01-22 14:16:47.556 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:16:47 np0005592157 nova_compute[245707]: 2026-01-22 14:16:47.556 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:16:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:16:47.588 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:16:47.588 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:16:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:16:47.588 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:16:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:47 np0005592157 podman[272892]: 2026-01-22 14:16:47.72176766 +0000 UTC m=+0.051310674 container create 29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:16:47 np0005592157 systemd[1]: Started libpod-conmon-29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f.scope.
Jan 22 09:16:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:16:47 np0005592157 podman[272892]: 2026-01-22 14:16:47.691898293 +0000 UTC m=+0.021441337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:16:47 np0005592157 podman[272892]: 2026-01-22 14:16:47.799132774 +0000 UTC m=+0.128675818 container init 29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:16:47 np0005592157 podman[272892]: 2026-01-22 14:16:47.80695861 +0000 UTC m=+0.136501624 container start 29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:16:47 np0005592157 systemd[1]: libpod-29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f.scope: Deactivated successfully.
Jan 22 09:16:47 np0005592157 hungry_pasteur[272909]: 167 167
Jan 22 09:16:47 np0005592157 podman[272892]: 2026-01-22 14:16:47.811977246 +0000 UTC m=+0.141520300 container attach 29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:16:47 np0005592157 conmon[272909]: conmon 29d7c165f365e6bd2fde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f.scope/container/memory.events
Jan 22 09:16:47 np0005592157 podman[272892]: 2026-01-22 14:16:47.812598941 +0000 UTC m=+0.142141965 container died 29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:16:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e458c3ac054cf4240b0946ae9629ca3d3fb25f81b6b63f99b0f44cbc19fca53a-merged.mount: Deactivated successfully.
Jan 22 09:16:47 np0005592157 podman[272892]: 2026-01-22 14:16:47.913730069 +0000 UTC m=+0.243273073 container remove 29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 22 09:16:47 np0005592157 systemd[1]: libpod-conmon-29d7c165f365e6bd2fde4f2507faef7004da286e3e90e18027f453074a12703f.scope: Deactivated successfully.
Jan 22 09:16:48 np0005592157 podman[272932]: 2026-01-22 14:16:48.081874023 +0000 UTC m=+0.045586931 container create 5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:16:48 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:48 np0005592157 systemd[1]: Started libpod-conmon-5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54.scope.
Jan 22 09:16:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:16:48 np0005592157 podman[272932]: 2026-01-22 14:16:48.060998321 +0000 UTC m=+0.024711259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:16:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea724ded139f0c6cc05f7a74e9af496badae2cc8c6908b1c4b173b6f9cfac6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea724ded139f0c6cc05f7a74e9af496badae2cc8c6908b1c4b173b6f9cfac6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea724ded139f0c6cc05f7a74e9af496badae2cc8c6908b1c4b173b6f9cfac6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea724ded139f0c6cc05f7a74e9af496badae2cc8c6908b1c4b173b6f9cfac6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:48 np0005592157 podman[272932]: 2026-01-22 14:16:48.17096802 +0000 UTC m=+0.134680948 container init 5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 09:16:48 np0005592157 podman[272932]: 2026-01-22 14:16:48.178499429 +0000 UTC m=+0.142212347 container start 5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:16:48 np0005592157 podman[272932]: 2026-01-22 14:16:48.19375356 +0000 UTC m=+0.157466468 container attach 5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_booth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 09:16:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:48.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:48 np0005592157 recursing_booth[272949]: {
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:    "0": [
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:        {
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "devices": [
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "/dev/loop3"
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            ],
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "lv_name": "ceph_lv0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "lv_size": "7511998464",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "name": "ceph_lv0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "tags": {
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.cluster_name": "ceph",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.crush_device_class": "",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.encrypted": "0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.osd_id": "0",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.type": "block",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:                "ceph.vdo": "0"
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            },
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "type": "block",
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:            "vg_name": "ceph_vg0"
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:        }
Jan 22 09:16:48 np0005592157 recursing_booth[272949]:    ]
Jan 22 09:16:48 np0005592157 recursing_booth[272949]: }
Jan 22 09:16:48 np0005592157 systemd[1]: libpod-5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54.scope: Deactivated successfully.
Jan 22 09:16:48 np0005592157 podman[272932]: 2026-01-22 14:16:48.964748096 +0000 UTC m=+0.928461004 container died 5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:16:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-eea724ded139f0c6cc05f7a74e9af496badae2cc8c6908b1c4b173b6f9cfac6c-merged.mount: Deactivated successfully.
Jan 22 09:16:49 np0005592157 podman[272932]: 2026-01-22 14:16:49.03051597 +0000 UTC m=+0.994228878 container remove 5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_booth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:16:49 np0005592157 systemd[1]: libpod-conmon-5b8704aa24ea0e2fe66660a88976ede7c8c1f914d09f61d8c5c8e5fc62cd5f54.scope: Deactivated successfully.
Jan 22 09:16:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:49.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:49 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:49 np0005592157 nova_compute[245707]: 2026-01-22 14:16:49.391 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:49 np0005592157 nova_compute[245707]: 2026-01-22 14:16:49.557 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:49 np0005592157 podman[273110]: 2026-01-22 14:16:49.692026637 +0000 UTC m=+0.088686588 container create 4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:16:49 np0005592157 podman[273110]: 2026-01-22 14:16:49.629266848 +0000 UTC m=+0.025926819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:16:49 np0005592157 systemd[1]: Started libpod-conmon-4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2.scope.
Jan 22 09:16:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:16:49 np0005592157 podman[273110]: 2026-01-22 14:16:49.770820097 +0000 UTC m=+0.167480068 container init 4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:16:49 np0005592157 podman[273110]: 2026-01-22 14:16:49.776639113 +0000 UTC m=+0.173299064 container start 4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_torvalds, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 09:16:49 np0005592157 podman[273110]: 2026-01-22 14:16:49.7801175 +0000 UTC m=+0.176777451 container attach 4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:16:49 np0005592157 wonderful_torvalds[273126]: 167 167
Jan 22 09:16:49 np0005592157 systemd[1]: libpod-4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2.scope: Deactivated successfully.
Jan 22 09:16:49 np0005592157 podman[273110]: 2026-01-22 14:16:49.783595087 +0000 UTC m=+0.180255068 container died 4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:16:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a32ea8ae8e8b66a641a488a0ae2cc1214dd3c53888183e5140a7546d83e6410c-merged.mount: Deactivated successfully.
Jan 22 09:16:49 np0005592157 podman[273110]: 2026-01-22 14:16:49.872845098 +0000 UTC m=+0.269505049 container remove 4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:16:49 np0005592157 systemd[1]: libpod-conmon-4fe0d49c8fb86bbca21576c1cd3e6ac51bf757a5ba0ccfb0f91be8b5d1daf0d2.scope: Deactivated successfully.
Jan 22 09:16:50 np0005592157 podman[273152]: 2026-01-22 14:16:50.039849353 +0000 UTC m=+0.044992826 container create 1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:16:50 np0005592157 systemd[1]: Started libpod-conmon-1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0.scope.
Jan 22 09:16:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:16:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98211a39c487c409a522078529d5994bcaea1229cf35e1a3d952ff8596a322ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98211a39c487c409a522078529d5994bcaea1229cf35e1a3d952ff8596a322ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98211a39c487c409a522078529d5994bcaea1229cf35e1a3d952ff8596a322ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98211a39c487c409a522078529d5994bcaea1229cf35e1a3d952ff8596a322ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:16:50 np0005592157 podman[273152]: 2026-01-22 14:16:50.017618787 +0000 UTC m=+0.022762280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:16:50 np0005592157 podman[273152]: 2026-01-22 14:16:50.113550036 +0000 UTC m=+0.118693539 container init 1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:16:50 np0005592157 podman[273152]: 2026-01-22 14:16:50.121652108 +0000 UTC m=+0.126795581 container start 1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:16:50 np0005592157 podman[273152]: 2026-01-22 14:16:50.125823993 +0000 UTC m=+0.130967466 container attach 1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:16:50 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]: {
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]:        "osd_id": 0,
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]:        "type": "bluestore"
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]:    }
Jan 22 09:16:51 np0005592157 bold_pasteur[273168]: }
Jan 22 09:16:51 np0005592157 systemd[1]: libpod-1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0.scope: Deactivated successfully.
Jan 22 09:16:51 np0005592157 podman[273152]: 2026-01-22 14:16:51.043408533 +0000 UTC m=+1.048552006 container died 1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 09:16:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-98211a39c487c409a522078529d5994bcaea1229cf35e1a3d952ff8596a322ba-merged.mount: Deactivated successfully.
Jan 22 09:16:51 np0005592157 podman[273152]: 2026-01-22 14:16:51.10647317 +0000 UTC m=+1.111616643 container remove 1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:16:51 np0005592157 systemd[1]: libpod-conmon-1e7b19b7202a90bb721b4cb03edb102fd5cdac4f247beed1f38e2c07ad61c4f0.scope: Deactivated successfully.
Jan 22 09:16:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:51.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4d3f0f8e-b693-4f8b-886f-0cfbfa7b4df4 does not exist
Jan 22 09:16:51 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 222793c8-1f2c-407c-afca-c39c0d58ca19 does not exist
Jan 22 09:16:51 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 403fc951-8bb6-425e-aceb-0e3729ad3464 does not exist
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:52 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:52 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:53.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:53 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:54 np0005592157 nova_compute[245707]: 2026-01-22 14:16:54.393 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:54 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:55.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:55 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:56 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:16:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:57.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:16:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:16:57 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:57 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:58.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:16:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:16:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:59.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:16:59 np0005592157 nova_compute[245707]: 2026-01-22 14:16:59.396 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:16:59 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 09:16:59 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:59 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 22 09:16:59 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 22 09:16:59 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 09:17:00 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 22 09:17:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:00.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:01.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 09:17:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:17:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:02.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:17:02 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:03.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004369903626930951 of space, bias 1.0, pg target 1.3109710880792853 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:17:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:17:04 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:04 np0005592157 nova_compute[245707]: 2026-01-22 14:17:04.397 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:05.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:05 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:05 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 09:17:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:06.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:06 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:17:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:17:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:07.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:17:07 np0005592157 podman[273311]: 2026-01-22 14:17:07.331879036 +0000 UTC m=+0.064841213 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 09:17:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 09:17:08 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:08 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:09.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:09 np0005592157 nova_compute[245707]: 2026-01-22 14:17:09.328 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:17:09.328 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:17:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:17:09.331 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:17:09 np0005592157 nova_compute[245707]: 2026-01-22 14:17:09.400 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:09 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Jan 22 09:17:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:10 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:11.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Jan 22 09:17:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:11 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:17:12.335 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:17:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:12.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:12 np0005592157 podman[273357]: 2026-01-22 14:17:12.689599637 +0000 UTC m=+0.094135784 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:17:12 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:12 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:13.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 88 op/s
Jan 22 09:17:13 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:14 np0005592157 nova_compute[245707]: 2026-01-22 14:17:14.401 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:17:14 np0005592157 nova_compute[245707]: 2026-01-22 14:17:14.403 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:17:14 np0005592157 nova_compute[245707]: 2026-01-22 14:17:14.403 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:17:14 np0005592157 nova_compute[245707]: 2026-01-22 14:17:14.403 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:17:14 np0005592157 nova_compute[245707]: 2026-01-22 14:17:14.446 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:14 np0005592157 nova_compute[245707]: 2026-01-22 14:17:14.447 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:17:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:14 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:15.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 114 op/s
Jan 22 09:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:17:16 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:16.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:17.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 09:17:17 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:17 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:17:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2171679207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:17:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:17:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2171679207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:17:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:18.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:18 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:18 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:19.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:19 np0005592157 nova_compute[245707]: 2026-01-22 14:17:19.448 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:19 np0005592157 nova_compute[245707]: 2026-01-22 14:17:19.450 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:17:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 09:17:19 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:21 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:21.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 93 op/s
Jan 22 09:17:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:22 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:17:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:22.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:17:22 np0005592157 nova_compute[245707]: 2026-01-22 14:17:22.663 245711 DEBUG oslo_concurrency.lockutils [None req-6e1dc630-7a6a-409a-8b8c-9181465ae7f2 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "b8bec212-84ad-47fd-9608-2cc1999da6c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:23.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Jan 22 09:17:23 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:23 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:24 np0005592157 nova_compute[245707]: 2026-01-22 14:17:24.449 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:24 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:24 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:25.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Jan 22 09:17:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:26 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:27.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:28 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:28 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:29 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:29.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:29 np0005592157 nova_compute[245707]: 2026-01-22 14:17:29.451 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:30 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:30.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:31.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:31 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:32 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:32 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:33.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:33 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.280528) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454280633, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1139, "num_deletes": 251, "total_data_size": 1591515, "memory_usage": 1621232, "flush_reason": "Manual Compaction"}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454296063, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1568060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39269, "largest_seqno": 40407, "table_properties": {"data_size": 1562724, "index_size": 2604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13558, "raw_average_key_size": 21, "raw_value_size": 1551283, "raw_average_value_size": 2405, "num_data_blocks": 111, "num_entries": 645, "num_filter_entries": 645, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091372, "oldest_key_time": 1769091372, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 15574 microseconds, and 5803 cpu microseconds.
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.296115) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1568060 bytes OK
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.296138) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298417) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298436) EVENT_LOG_v1 {"time_micros": 1769091454298429, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1586135, prev total WAL file size 1586135, number of live WAL files 2.
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.299148) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1531KB)], [83(8309KB)]
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454299223, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10077440, "oldest_snapshot_seqno": -1}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 8312 keys, 8450186 bytes, temperature: kUnknown
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454366840, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8450186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8403070, "index_size": 25244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 222011, "raw_average_key_size": 26, "raw_value_size": 8259687, "raw_average_value_size": 993, "num_data_blocks": 963, "num_entries": 8312, "num_filter_entries": 8312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.367217) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8450186 bytes
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.368970) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.6 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 8829, records dropped: 517 output_compression: NoCompression
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.368994) EVENT_LOG_v1 {"time_micros": 1769091454368982, "job": 48, "event": "compaction_finished", "compaction_time_micros": 67805, "compaction_time_cpu_micros": 22994, "output_level": 6, "num_output_files": 1, "total_output_size": 8450186, "num_input_records": 8829, "num_output_records": 8312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454369439, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454371330, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.299037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.371400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.371408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.371409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.371410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:17:34.371412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592157 nova_compute[245707]: 2026-01-22 14:17:34.451 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:34.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:35.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:35 np0005592157 nova_compute[245707]: 2026-01-22 14:17:35.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:35 np0005592157 nova_compute[245707]: 2026-01-22 14:17:35.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:17:35 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:35 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:36 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:36.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:37.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:37 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:37 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:37 np0005592157 podman[273474]: 2026-01-22 14:17:37.872588444 +0000 UTC m=+0.075314542 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 09:17:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:38.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:39 np0005592157 nova_compute[245707]: 2026-01-22 14:17:39.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:39 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:39 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:39 np0005592157 nova_compute[245707]: 2026-01-22 14:17:39.453 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:40 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:40.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:41 np0005592157 nova_compute[245707]: 2026-01-22 14:17:41.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:41 np0005592157 nova_compute[245707]: 2026-01-22 14:17:41.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:42 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:42 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:42 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:43.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:43 np0005592157 nova_compute[245707]: 2026-01-22 14:17:43.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:43 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:43 np0005592157 podman[273495]: 2026-01-22 14:17:43.345969912 +0000 UTC m=+0.087114896 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:17:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:44 np0005592157 nova_compute[245707]: 2026-01-22 14:17:44.239 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:44 np0005592157 nova_compute[245707]: 2026-01-22 14:17:44.454 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:17:44 np0005592157 nova_compute[245707]: 2026-01-22 14:17:44.455 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:44 np0005592157 nova_compute[245707]: 2026-01-22 14:17:44.456 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:17:44 np0005592157 nova_compute[245707]: 2026-01-22 14:17:44.456 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:17:44 np0005592157 nova_compute[245707]: 2026-01-22 14:17:44.456 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:17:44 np0005592157 nova_compute[245707]: 2026-01-22 14:17:44.457 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:44 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:45.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:45 np0005592157 nova_compute[245707]: 2026-01-22 14:17:45.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:45 np0005592157 nova_compute[245707]: 2026-01-22 14:17:45.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:17:45 np0005592157 nova_compute[245707]: 2026-01-22 14:17:45.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:17:45 np0005592157 nova_compute[245707]: 2026-01-22 14:17:45.306 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:17:45 np0005592157 nova_compute[245707]: 2026-01-22 14:17:45.307 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:17:45 np0005592157 nova_compute[245707]: 2026-01-22 14:17:45.307 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:17:45 np0005592157 nova_compute[245707]: 2026-01-22 14:17:45.307 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:17:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:45 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.281 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.281 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.281 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.282 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.282 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:17:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:46.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:17:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921556699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.711 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:17:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.871 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.873 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4810MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.873 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:46 np0005592157 nova_compute[245707]: 2026-01-22 14:17:46.874 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:17:47 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:47 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:47 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:47 np0005592157 ceph-mgr[74655]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:17:47
Jan 22 09:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'volumes', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups']
Jan 22 09:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:17:47 np0005592157 nova_compute[245707]: 2026-01-22 14:17:47.491 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:17:47 np0005592157 nova_compute[245707]: 2026-01-22 14:17:47.491 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:17:47 np0005592157 nova_compute[245707]: 2026-01-22 14:17:47.491 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:17:47 np0005592157 nova_compute[245707]: 2026-01-22 14:17:47.492 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:17:47 np0005592157 nova_compute[245707]: 2026-01-22 14:17:47.492 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:17:47 np0005592157 nova_compute[245707]: 2026-01-22 14:17:47.562 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:17:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:17:47.589 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:17:47.589 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:17:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:17:47.590 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:17:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:17:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359774298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:17:47 np0005592157 nova_compute[245707]: 2026-01-22 14:17:47.997 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:17:48 np0005592157 nova_compute[245707]: 2026-01-22 14:17:48.004 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:17:48 np0005592157 nova_compute[245707]: 2026-01-22 14:17:48.025 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:17:48 np0005592157 nova_compute[245707]: 2026-01-22 14:17:48.059 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:17:48 np0005592157 nova_compute[245707]: 2026-01-22 14:17:48.060 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:17:48 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:49.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:49 np0005592157 nova_compute[245707]: 2026-01-22 14:17:49.457 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:49 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:50.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:50 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:51.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:52 np0005592157 nova_compute[245707]: 2026-01-22 14:17:52.060 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:17:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:53 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:53 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:54 np0005592157 nova_compute[245707]: 2026-01-22 14:17:54.459 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:54.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:54 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:55 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:56.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:17:57 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:57.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:17:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:17:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:58.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:58 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:58 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:17:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:17:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:17:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:59 np0005592157 nova_compute[245707]: 2026-01-22 14:17:59.461 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:17:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.507 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.507 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.529 245711 DEBUG nova.compute.manager [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:18:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:00.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.617 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.618 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.623 245711 DEBUG nova.virt.hardware [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.624 245711 INFO nova.compute.claims [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3054d72f-180b-4745-8f1b-842076bc7e40 does not exist
Jan 22 09:18:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ba5c7131-d466-49c6-b87e-8aa9d633af83 does not exist
Jan 22 09:18:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev db59e671-80ab-4f88-a761-29943a8c3aec does not exist
Jan 22 09:18:00 np0005592157 nova_compute[245707]: 2026-01-22 14:18:00.772 245711 DEBUG oslo_concurrency.processutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:18:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:18:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882272731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.466 245711 DEBUG oslo_concurrency.processutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:18:01 np0005592157 podman[273917]: 2026-01-22 14:18:01.469453658 +0000 UTC m=+0.096006189 container create bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.472 245711 DEBUG nova.compute.provider_tree [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:18:01 np0005592157 podman[273917]: 2026-01-22 14:18:01.400101281 +0000 UTC m=+0.026653832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.506 245711 DEBUG nova.scheduler.client.report [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.537 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.539 245711 DEBUG nova.compute.manager [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.593 245711 DEBUG nova.compute.manager [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.593 245711 DEBUG nova.network.neutron [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 09:18:01 np0005592157 systemd[1]: Started libpod-conmon-bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796.scope.
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.617 245711 INFO nova.virt.libvirt.driver [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 09:18:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.639 245711 DEBUG nova.compute.manager [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 09:18:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 2467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.734 245711 DEBUG nova.compute.manager [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.735 245711 DEBUG nova.virt.libvirt.driver [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.735 245711 INFO nova.virt.libvirt.driver [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Creating image(s)
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.767 245711 DEBUG nova.storage.rbd_utils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] rbd image 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.799 245711 DEBUG nova.storage.rbd_utils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] rbd image 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.831 245711 DEBUG nova.storage.rbd_utils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] rbd image 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.834 245711 DEBUG oslo_concurrency.processutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:18:01 np0005592157 podman[273917]: 2026-01-22 14:18:01.854792433 +0000 UTC m=+0.481344974 container init bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_varahamihira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:18:01 np0005592157 podman[273917]: 2026-01-22 14:18:01.863675806 +0000 UTC m=+0.490228327 container start bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:18:01 np0005592157 reverent_varahamihira[273935]: 167 167
Jan 22 09:18:01 np0005592157 systemd[1]: libpod-bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796.scope: Deactivated successfully.
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.898 245711 DEBUG oslo_concurrency.processutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.900 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.900 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.901 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.923 245711 DEBUG nova.storage.rbd_utils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] rbd image 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:18:01 np0005592157 nova_compute[245707]: 2026-01-22 14:18:01.926 245711 DEBUG oslo_concurrency.processutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:18:02 np0005592157 podman[273917]: 2026-01-22 14:18:02.091069533 +0000 UTC m=+0.717622084 container attach bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:18:02 np0005592157 podman[273917]: 2026-01-22 14:18:02.091551365 +0000 UTC m=+0.718103906 container died bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_varahamihira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:18:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ce42c47d7b0b55873febd73ee3ceae6c382213880edb80ea5ddcf05ec7b1d96e-merged.mount: Deactivated successfully.
Jan 22 09:18:02 np0005592157 podman[273917]: 2026-01-22 14:18:02.435913824 +0000 UTC m=+1.062466355 container remove bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_varahamihira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:18:02 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:02 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 2467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:02 np0005592157 systemd[1]: libpod-conmon-bd725410dfe483711908f6564ac4b1bf24fe76762c69e9d6fd641b7a26a10796.scope: Deactivated successfully.
Jan 22 09:18:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:02 np0005592157 podman[274054]: 2026-01-22 14:18:02.58215143 +0000 UTC m=+0.025394232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:18:02 np0005592157 podman[274054]: 2026-01-22 14:18:02.692789639 +0000 UTC m=+0.136032421 container create 185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:18:02 np0005592157 systemd[1]: Started libpod-conmon-185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1.scope.
Jan 22 09:18:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:18:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabee10468c6c8413d3247cf6e2e0daaa2a86facaac8d0340f1a0e92f768d99d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabee10468c6c8413d3247cf6e2e0daaa2a86facaac8d0340f1a0e92f768d99d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabee10468c6c8413d3247cf6e2e0daaa2a86facaac8d0340f1a0e92f768d99d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabee10468c6c8413d3247cf6e2e0daaa2a86facaac8d0340f1a0e92f768d99d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabee10468c6c8413d3247cf6e2e0daaa2a86facaac8d0340f1a0e92f768d99d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:02 np0005592157 podman[274054]: 2026-01-22 14:18:02.889278533 +0000 UTC m=+0.332521435 container init 185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:18:02 np0005592157 podman[274054]: 2026-01-22 14:18:02.900774489 +0000 UTC m=+0.344017271 container start 185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:18:02 np0005592157 podman[274054]: 2026-01-22 14:18:02.9577764 +0000 UTC m=+0.401019202 container attach 185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:18:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:03.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:03 np0005592157 nova_compute[245707]: 2026-01-22 14:18:03.592 245711 DEBUG nova.policy [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '32df6d966d7540dd851bf51a1148be65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b4b5b635cbf4888966d80692b78281f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 09:18:03 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:18:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 272 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 579 KiB/s wr, 11 op/s
Jan 22 09:18:03 np0005592157 recursing_nobel[274070]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:18:03 np0005592157 recursing_nobel[274070]: --> relative data size: 1.0
Jan 22 09:18:03 np0005592157 recursing_nobel[274070]: --> All data devices are unavailable
Jan 22 09:18:03 np0005592157 systemd[1]: libpod-185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1.scope: Deactivated successfully.
Jan 22 09:18:03 np0005592157 podman[274054]: 2026-01-22 14:18:03.73350444 +0000 UTC m=+1.176747222 container died 185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:18:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cabee10468c6c8413d3247cf6e2e0daaa2a86facaac8d0340f1a0e92f768d99d-merged.mount: Deactivated successfully.
Jan 22 09:18:03 np0005592157 podman[274054]: 2026-01-22 14:18:03.859665163 +0000 UTC m=+1.302907945 container remove 185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:18:03 np0005592157 systemd[1]: libpod-conmon-185687a86d1620d0a23d0383d8c2322ba83b4c6e2d4aadcc15671ceb4cd2a9c1.scope: Deactivated successfully.
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0046847009933928905 of space, bias 1.0, pg target 1.405410298017867 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:18:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:18:04 np0005592157 nova_compute[245707]: 2026-01-22 14:18:04.464 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:18:04 np0005592157 podman[274238]: 2026-01-22 14:18:04.592083481 +0000 UTC m=+0.111937541 container create 48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:18:04 np0005592157 podman[274238]: 2026-01-22 14:18:04.504076935 +0000 UTC m=+0.023931015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:18:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:04.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:04 np0005592157 systemd[1]: Started libpod-conmon-48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a.scope.
Jan 22 09:18:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:18:04 np0005592157 podman[274238]: 2026-01-22 14:18:04.92146161 +0000 UTC m=+0.441315700 container init 48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_merkle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:18:04 np0005592157 podman[274238]: 2026-01-22 14:18:04.929162935 +0000 UTC m=+0.449016995 container start 48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:18:04 np0005592157 inspiring_merkle[274255]: 167 167
Jan 22 09:18:04 np0005592157 podman[274238]: 2026-01-22 14:18:04.934158335 +0000 UTC m=+0.454012415 container attach 48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_merkle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:18:04 np0005592157 systemd[1]: libpod-48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a.scope: Deactivated successfully.
Jan 22 09:18:04 np0005592157 podman[274238]: 2026-01-22 14:18:04.935168819 +0000 UTC m=+0.455022889 container died 48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:18:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-964150f2cca5bdb87c089ef05e6b9379542f54e895f077a661843c66cef93d06-merged.mount: Deactivated successfully.
Jan 22 09:18:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:05.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:05 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:05 np0005592157 podman[274238]: 2026-01-22 14:18:05.570835222 +0000 UTC m=+1.090689272 container remove 48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 09:18:05 np0005592157 systemd[1]: libpod-conmon-48598734a6fed3c833c15f858cd32a1aa9f46d8d034aa4853a10145e9169867a.scope: Deactivated successfully.
Jan 22 09:18:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 09:18:05 np0005592157 podman[274279]: 2026-01-22 14:18:05.751569107 +0000 UTC m=+0.025556406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:18:05 np0005592157 podman[274279]: 2026-01-22 14:18:05.941069493 +0000 UTC m=+0.215056812 container create 819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_taussig, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:18:06 np0005592157 systemd[1]: Started libpod-conmon-819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca.scope.
Jan 22 09:18:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:18:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1659e27b936ede1b439dc044579521158ea4a2e8461c2f62e8a564474e269bff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1659e27b936ede1b439dc044579521158ea4a2e8461c2f62e8a564474e269bff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1659e27b936ede1b439dc044579521158ea4a2e8461c2f62e8a564474e269bff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1659e27b936ede1b439dc044579521158ea4a2e8461c2f62e8a564474e269bff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:06 np0005592157 podman[274279]: 2026-01-22 14:18:06.227683673 +0000 UTC m=+0.501670972 container init 819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_taussig, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:18:06 np0005592157 podman[274279]: 2026-01-22 14:18:06.238972545 +0000 UTC m=+0.512959824 container start 819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_taussig, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:18:06 np0005592157 podman[274279]: 2026-01-22 14:18:06.259758365 +0000 UTC m=+0.533745644 container attach 819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_taussig, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.423 245711 DEBUG nova.network.neutron [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Successfully updated port: d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.444 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "refresh_cache-7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.444 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquired lock "refresh_cache-7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.445 245711 DEBUG nova.network.neutron [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.558 245711 DEBUG nova.compute.manager [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Received event network-changed-d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.559 245711 DEBUG nova.compute.manager [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Refreshing instance network info cache due to event network-changed-d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.560 245711 DEBUG oslo_concurrency.lockutils [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:18:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:06.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:06 np0005592157 nova_compute[245707]: 2026-01-22 14:18:06.662 245711 DEBUG nova.network.neutron [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:18:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]: {
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:    "0": [
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:        {
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "devices": [
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "/dev/loop3"
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            ],
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "lv_name": "ceph_lv0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "lv_size": "7511998464",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "name": "ceph_lv0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "tags": {
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.cluster_name": "ceph",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.crush_device_class": "",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.encrypted": "0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.osd_id": "0",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.type": "block",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:                "ceph.vdo": "0"
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            },
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "type": "block",
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:            "vg_name": "ceph_vg0"
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:        }
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]:    ]
Jan 22 09:18:07 np0005592157 suspicious_taussig[274296]: }
Jan 22 09:18:07 np0005592157 systemd[1]: libpod-819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca.scope: Deactivated successfully.
Jan 22 09:18:07 np0005592157 podman[274279]: 2026-01-22 14:18:07.034431269 +0000 UTC m=+1.308418608 container died 819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_taussig, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:18:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:07.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:07 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:07 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 09:18:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1659e27b936ede1b439dc044579521158ea4a2e8461c2f62e8a564474e269bff-merged.mount: Deactivated successfully.
Jan 22 09:18:08 np0005592157 podman[274279]: 2026-01-22 14:18:08.106129442 +0000 UTC m=+2.380116721 container remove 819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:18:08 np0005592157 systemd[1]: libpod-conmon-819e76c9c3f7dd24b4d26ee7ffa0686d32802658ba48b415908e3efd443e5dca.scope: Deactivated successfully.
Jan 22 09:18:08 np0005592157 podman[274339]: 2026-01-22 14:18:08.338980301 +0000 UTC m=+0.075449735 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 09:18:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:08 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:08.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:08 np0005592157 podman[274475]: 2026-01-22 14:18:08.801346367 +0000 UTC m=+0.025948565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:18:08 np0005592157 podman[274475]: 2026-01-22 14:18:08.978873744 +0000 UTC m=+0.203475842 container create 938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:18:09 np0005592157 systemd[1]: Started libpod-conmon-938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d.scope.
Jan 22 09:18:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:18:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:09 np0005592157 podman[274475]: 2026-01-22 14:18:09.420517252 +0000 UTC m=+0.645119440 container init 938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:18:09 np0005592157 podman[274475]: 2026-01-22 14:18:09.433442903 +0000 UTC m=+0.658045041 container start 938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:18:09 np0005592157 focused_gould[274492]: 167 167
Jan 22 09:18:09 np0005592157 systemd[1]: libpod-938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d.scope: Deactivated successfully.
Jan 22 09:18:09 np0005592157 podman[274475]: 2026-01-22 14:18:09.450842752 +0000 UTC m=+0.675444880 container attach 938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:18:09 np0005592157 podman[274475]: 2026-01-22 14:18:09.452535562 +0000 UTC m=+0.677137720 container died 938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:18:09 np0005592157 nova_compute[245707]: 2026-01-22 14:18:09.465 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:09 np0005592157 nova_compute[245707]: 2026-01-22 14:18:09.512 245711 DEBUG nova.network.neutron [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Updating instance_info_cache with network_info: [{"id": "d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f", "address": "fa:16:3e:5f:40:39", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd62be26a-ce", "ovs_interfaceid": "d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:18:09 np0005592157 nova_compute[245707]: 2026-01-22 14:18:09.538 245711 DEBUG oslo_concurrency.lockutils [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Releasing lock "refresh_cache-7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:18:09 np0005592157 nova_compute[245707]: 2026-01-22 14:18:09.538 245711 DEBUG nova.compute.manager [None req-ac2e751f-32ee-443a-9076-0c1ae80af21a 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Instance network_info: |[{"id": "d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f", "address": "fa:16:3e:5f:40:39", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd62be26a-ce", "ovs_interfaceid": "d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:18:09 np0005592157 nova_compute[245707]: 2026-01-22 14:18:09.539 245711 DEBUG oslo_concurrency.lockutils [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 09:18:09 np0005592157 nova_compute[245707]: 2026-01-22 14:18:09.539 245711 DEBUG nova.network.neutron [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Refreshing network info cache for port d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 09:18:09 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 09:18:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1491dc1e74948f6a01cd12ac4f7f7da644f271a6c22acb97c24a264037eb8126-merged.mount: Deactivated successfully.
Jan 22 09:18:10 np0005592157 podman[274475]: 2026-01-22 14:18:10.025116848 +0000 UTC m=+1.249718956 container remove 938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:18:10 np0005592157 systemd[1]: libpod-conmon-938b0a618b38f84e5a6472314b851684e8801201b2f87c2bb208234a7160254d.scope: Deactivated successfully.
Jan 22 09:18:10 np0005592157 podman[274520]: 2026-01-22 14:18:10.234240146 +0000 UTC m=+0.088445587 container create 8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:18:10 np0005592157 podman[274520]: 2026-01-22 14:18:10.169299005 +0000 UTC m=+0.023504466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:18:10 np0005592157 systemd[1]: Started libpod-conmon-8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f.scope.
Jan 22 09:18:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:18:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd7e9f21c9676d956ac022134f90d81fe0fea91b79a7984dd48f883cc002dc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd7e9f21c9676d956ac022134f90d81fe0fea91b79a7984dd48f883cc002dc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd7e9f21c9676d956ac022134f90d81fe0fea91b79a7984dd48f883cc002dc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd7e9f21c9676d956ac022134f90d81fe0fea91b79a7984dd48f883cc002dc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:18:10 np0005592157 podman[274520]: 2026-01-22 14:18:10.343441102 +0000 UTC m=+0.197646643 container init 8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:18:10 np0005592157 podman[274520]: 2026-01-22 14:18:10.350768388 +0000 UTC m=+0.204973839 container start 8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:18:10 np0005592157 podman[274520]: 2026-01-22 14:18:10.373947555 +0000 UTC m=+0.228153016 container attach 8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:18:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:10.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:10 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:10 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:11 np0005592157 focused_payne[274536]: {
Jan 22 09:18:11 np0005592157 focused_payne[274536]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:18:11 np0005592157 focused_payne[274536]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:18:11 np0005592157 focused_payne[274536]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:18:11 np0005592157 focused_payne[274536]:        "osd_id": 0,
Jan 22 09:18:11 np0005592157 focused_payne[274536]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:18:11 np0005592157 focused_payne[274536]:        "type": "bluestore"
Jan 22 09:18:11 np0005592157 focused_payne[274536]:    }
Jan 22 09:18:11 np0005592157 focused_payne[274536]: }
Jan 22 09:18:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:11.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:11 np0005592157 systemd[1]: libpod-8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f.scope: Deactivated successfully.
Jan 22 09:18:11 np0005592157 podman[274520]: 2026-01-22 14:18:11.264321341 +0000 UTC m=+1.118526792 container died 8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:18:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0cd7e9f21c9676d956ac022134f90d81fe0fea91b79a7984dd48f883cc002dc2-merged.mount: Deactivated successfully.
Jan 22 09:18:11 np0005592157 podman[274520]: 2026-01-22 14:18:11.331774403 +0000 UTC m=+1.185979854 container remove 8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:18:11 np0005592157 systemd[1]: libpod-conmon-8e9528cfaea19ba9ba9162333e1e2e5589457874b3260e1f639a0fdae347957f.scope: Deactivated successfully.
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3e7036dc-3555-408f-8b6b-9a8171074a58 does not exist
Jan 22 09:18:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 01501d24-5d83-47c0-a1c9-96ee656ba7a5 does not exist
Jan 22 09:18:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ff8c5679-e9ca-4c4d-ac3b-ea835221da5d does not exist
Jan 22 09:18:11 np0005592157 nova_compute[245707]: 2026-01-22 14:18:11.566 245711 DEBUG nova.network.neutron [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Updated VIF entry in instance network info cache for port d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 09:18:11 np0005592157 nova_compute[245707]: 2026-01-22 14:18:11.568 245711 DEBUG nova.network.neutron [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Updating instance_info_cache with network_info: [{"id": "d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f", "address": "fa:16:3e:5f:40:39", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd62be26a-ce", "ovs_interfaceid": "d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 09:18:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 09:18:11 np0005592157 nova_compute[245707]: 2026-01-22 14:18:11.688 245711 DEBUG oslo_concurrency.lockutils [req-950138ec-a0ef-451d-bde3-901c636c034b req-b08f4f27-5368-49cb-a766-e07b9521d30c 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.861419) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491861471, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 678, "num_deletes": 255, "total_data_size": 805562, "memory_usage": 819496, "flush_reason": "Manual Compaction"}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491868949, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 784551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40408, "largest_seqno": 41085, "table_properties": {"data_size": 780920, "index_size": 1411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9036, "raw_average_key_size": 19, "raw_value_size": 773296, "raw_average_value_size": 1703, "num_data_blocks": 61, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091454, "oldest_key_time": 1769091454, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 7573 microseconds, and 2843 cpu microseconds.
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.868990) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 784551 bytes OK
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.869005) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.870813) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.870840) EVENT_LOG_v1 {"time_micros": 1769091491870823, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.870859) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 801869, prev total WAL file size 801869, number of live WAL files 2.
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871477) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(766KB)], [86(8252KB)]
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491871549, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 9234737, "oldest_snapshot_seqno": -1}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 8241 keys, 9067486 bytes, temperature: kUnknown
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491943193, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9067486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9019977, "index_size": 25829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 221794, "raw_average_key_size": 26, "raw_value_size": 8876921, "raw_average_value_size": 1077, "num_data_blocks": 985, "num_entries": 8241, "num_filter_entries": 8241, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944035) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9067486 bytes
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.947284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.7 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.3) write-amplify(11.6) OK, records in: 8766, records dropped: 525 output_compression: NoCompression
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.947343) EVENT_LOG_v1 {"time_micros": 1769091491947322, "job": 50, "event": "compaction_finished", "compaction_time_micros": 71759, "compaction_time_cpu_micros": 23288, "output_level": 6, "num_output_files": 1, "total_output_size": 9067486, "num_input_records": 8766, "num_output_records": 8241, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491948131, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491950601, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.950646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.950652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.950655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.950657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:18:11.950659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:12.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:13.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 09:18:13 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:13 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:14 np0005592157 podman[274672]: 2026-01-22 14:18:14.395863048 +0000 UTC m=+0.129496914 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:18:14 np0005592157 nova_compute[245707]: 2026-01-22 14:18:14.466 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:18:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:15.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 1.2 MiB/s wr, 4 op/s
Jan 22 09:18:15 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:18:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:16.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:17 np0005592157 ovn_controller[146940]: 2026-01-22T14:18:17Z|00037|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 22 09:18:18 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:19 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:19 np0005592157 nova_compute[245707]: 2026-01-22 14:18:19.468 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:20 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:21.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:22.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:22 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:22 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:23.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:24 np0005592157 nova_compute[245707]: 2026-01-22 14:18:24.470 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:25.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:25 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:27 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:28.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:29 np0005592157 nova_compute[245707]: 2026-01-22 14:18:29.472 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:29 np0005592157 nova_compute[245707]: 2026-01-22 14:18:29.473 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:29.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:30 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:30.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:31.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 2502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:32 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:32.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:33 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 2502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:33.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:34 np0005592157 nova_compute[245707]: 2026-01-22 14:18:34.509 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:34.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:34 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:34 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:18:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:18:35 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:36 np0005592157 nova_compute[245707]: 2026-01-22 14:18:36.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:36 np0005592157 nova_compute[245707]: 2026-01-22 14:18:36.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:18:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:36.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:36 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:37.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:37 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:39 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:39 np0005592157 nova_compute[245707]: 2026-01-22 14:18:39.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:39 np0005592157 podman[274764]: 2026-01-22 14:18:39.342182515 +0000 UTC m=+0.078309614 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:18:39 np0005592157 nova_compute[245707]: 2026-01-22 14:18:39.511 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:39 np0005592157 nova_compute[245707]: 2026-01-22 14:18:39.512 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:39 np0005592157 nova_compute[245707]: 2026-01-22 14:18:39.513 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:18:39 np0005592157 nova_compute[245707]: 2026-01-22 14:18:39.513 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:39 np0005592157 nova_compute[245707]: 2026-01-22 14:18:39.513 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:39 np0005592157 nova_compute[245707]: 2026-01-22 14:18:39.514 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:40 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:41 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:41 np0005592157 nova_compute[245707]: 2026-01-22 14:18:41.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:42 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:42 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:43 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:43 np0005592157 nova_compute[245707]: 2026-01-22 14:18:43.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:18:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:18:44 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:44 np0005592157 nova_compute[245707]: 2026-01-22 14:18:44.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:44 np0005592157 nova_compute[245707]: 2026-01-22 14:18:44.515 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:44 np0005592157 nova_compute[245707]: 2026-01-22 14:18:44.517 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:44 np0005592157 nova_compute[245707]: 2026-01-22 14:18:44.517 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:18:44 np0005592157 nova_compute[245707]: 2026-01-22 14:18:44.517 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:44 np0005592157 nova_compute[245707]: 2026-01-22 14:18:44.546 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:44 np0005592157 nova_compute[245707]: 2026-01-22 14:18:44.547 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:44.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:18:45 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.276 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.276 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:18:45 np0005592157 nova_compute[245707]: 2026-01-22 14:18:45.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:18:45 np0005592157 podman[274787]: 2026-01-22 14:18:45.353901554 +0000 UTC m=+0.090246781 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:18:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:45.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:46 np0005592157 nova_compute[245707]: 2026-01-22 14:18:46.272 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:46 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:18:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:47 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:47 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:18:47
Jan 22 09:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'default.rgw.control']
Jan 22 09:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:18:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:18:47.590 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:18:47.591 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:18:47.591 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:18:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:18:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:47.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.277 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.278 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.278 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.278 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.278 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:18:48 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:18:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:18:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:18:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1217922939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.716 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.869 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.870 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4724MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.870 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.870 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.987 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.988 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.988 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.988 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.988 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:18:48 np0005592157 nova_compute[245707]: 2026-01-22 14:18:48.988 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:18:49 np0005592157 nova_compute[245707]: 2026-01-22 14:18:49.084 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:18:49 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:18:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545743632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:18:49 np0005592157 nova_compute[245707]: 2026-01-22 14:18:49.516 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:18:49 np0005592157 nova_compute[245707]: 2026-01-22 14:18:49.522 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:18:49 np0005592157 nova_compute[245707]: 2026-01-22 14:18:49.541 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:18:49 np0005592157 nova_compute[245707]: 2026-01-22 14:18:49.547 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:49 np0005592157 nova_compute[245707]: 2026-01-22 14:18:49.567 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:18:49 np0005592157 nova_compute[245707]: 2026-01-22 14:18:49.568 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:18:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:18:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:49.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:18:50 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:51 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:51 np0005592157 nova_compute[245707]: 2026-01-22 14:18:51.568 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:51.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:52 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:52.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:53 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:53 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:54 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:54 np0005592157 nova_compute[245707]: 2026-01-22 14:18:54.549 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:54 np0005592157 nova_compute[245707]: 2026-01-22 14:18:54.551 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:54 np0005592157 nova_compute[245707]: 2026-01-22 14:18:54.551 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:18:54 np0005592157 nova_compute[245707]: 2026-01-22 14:18:54.551 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:54 np0005592157 nova_compute[245707]: 2026-01-22 14:18:54.597 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:54 np0005592157 nova_compute[245707]: 2026-01-22 14:18:54.598 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:54.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:55 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:56 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:56.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:57 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:57.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:18:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:58.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:18:58 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:58 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:59 np0005592157 nova_compute[245707]: 2026-01-22 14:18:59.599 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:18:59 np0005592157 nova_compute[245707]: 2026-01-22 14:18:59.600 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:59 np0005592157 nova_compute[245707]: 2026-01-22 14:18:59.600 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:18:59 np0005592157 nova_compute[245707]: 2026-01-22 14:18:59.600 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:59 np0005592157 nova_compute[245707]: 2026-01-22 14:18:59.601 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:18:59 np0005592157 nova_compute[245707]: 2026-01-22 14:18:59.602 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:18:59 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:18:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:59.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:00 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:01 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:01.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:02 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:02 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:03 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:03.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005357553101153918 of space, bias 1.0, pg target 1.6072659303461754 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:19:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:19:04 np0005592157 nova_compute[245707]: 2026-01-22 14:19:04.603 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:04 np0005592157 nova_compute[245707]: 2026-01-22 14:19:04.605 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:04 np0005592157 nova_compute[245707]: 2026-01-22 14:19:04.605 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:19:04 np0005592157 nova_compute[245707]: 2026-01-22 14:19:04.605 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:04 np0005592157 nova_compute[245707]: 2026-01-22 14:19:04.642 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:04 np0005592157 nova_compute[245707]: 2026-01-22 14:19:04.643 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:04.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:04 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:05.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:05 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:06.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:06 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:19:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:07.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:19:07 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:07 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:08 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:09 np0005592157 nova_compute[245707]: 2026-01-22 14:19:09.644 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:19:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:19:10 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:10 np0005592157 podman[274923]: 2026-01-22 14:19:10.33647127 +0000 UTC m=+0.072608806 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:19:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:11 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:11.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:12.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8198e0a8-19e6-47ad-b0b9-3ac9d141b531 does not exist
Jan 22 09:19:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5cb76112-f354-44a0-9d24-f46ae045247e does not exist
Jan 22 09:19:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 164abc76-339c-41b1-a6ac-b77c8c88f544 does not exist
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:19:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:19:13 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:13 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:19:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:19:13 np0005592157 podman[275214]: 2026-01-22 14:19:13.329189499 +0000 UTC m=+0.043429265 container create 66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:19:13 np0005592157 systemd[1]: Started libpod-conmon-66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b.scope.
Jan 22 09:19:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:19:13 np0005592157 podman[275214]: 2026-01-22 14:19:13.311116345 +0000 UTC m=+0.025356131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:19:13 np0005592157 podman[275214]: 2026-01-22 14:19:13.418067866 +0000 UTC m=+0.132307642 container init 66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:19:13 np0005592157 podman[275214]: 2026-01-22 14:19:13.425813472 +0000 UTC m=+0.140053238 container start 66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 09:19:13 np0005592157 podman[275214]: 2026-01-22 14:19:13.430670009 +0000 UTC m=+0.144909805 container attach 66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 09:19:13 np0005592157 tender_feynman[275230]: 167 167
Jan 22 09:19:13 np0005592157 systemd[1]: libpod-66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b.scope: Deactivated successfully.
Jan 22 09:19:13 np0005592157 podman[275214]: 2026-01-22 14:19:13.433860206 +0000 UTC m=+0.148100002 container died 66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:19:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6c204e075056b91a7b466577a48f20ea3cf9cc20c2fc2831ccfc9fbf126fdc97-merged.mount: Deactivated successfully.
Jan 22 09:19:13 np0005592157 podman[275214]: 2026-01-22 14:19:13.492077375 +0000 UTC m=+0.206317141 container remove 66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feynman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:19:13 np0005592157 systemd[1]: libpod-conmon-66b5163cd028311376dc93acd2971f3a54a3ea1018630f6cb4d30f12b766a45b.scope: Deactivated successfully.
Jan 22 09:19:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:13 np0005592157 podman[275305]: 2026-01-22 14:19:13.635724909 +0000 UTC m=+0.027051351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:19:13 np0005592157 podman[275305]: 2026-01-22 14:19:13.772127558 +0000 UTC m=+0.163453970 container create e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:19:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:13.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:13 np0005592157 systemd[1]: Started libpod-conmon-e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b.scope.
Jan 22 09:19:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:19:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564250ca7fdf3c02d675329e8cae8d6ba77bb7104d8a694e25c982062cd2f8c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564250ca7fdf3c02d675329e8cae8d6ba77bb7104d8a694e25c982062cd2f8c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564250ca7fdf3c02d675329e8cae8d6ba77bb7104d8a694e25c982062cd2f8c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564250ca7fdf3c02d675329e8cae8d6ba77bb7104d8a694e25c982062cd2f8c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564250ca7fdf3c02d675329e8cae8d6ba77bb7104d8a694e25c982062cd2f8c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:14 np0005592157 podman[275305]: 2026-01-22 14:19:14.020954801 +0000 UTC m=+0.412281233 container init e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:19:14 np0005592157 podman[275305]: 2026-01-22 14:19:14.027615671 +0000 UTC m=+0.418942083 container start e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:19:14 np0005592157 podman[275305]: 2026-01-22 14:19:14.031655148 +0000 UTC m=+0.422981580 container attach e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:19:14 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:14 np0005592157 nova_compute[245707]: 2026-01-22 14:19:14.646 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:14 np0005592157 nova_compute[245707]: 2026-01-22 14:19:14.648 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:14 np0005592157 nova_compute[245707]: 2026-01-22 14:19:14.649 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:19:14 np0005592157 nova_compute[245707]: 2026-01-22 14:19:14.649 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:14 np0005592157 nova_compute[245707]: 2026-01-22 14:19:14.693 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:14 np0005592157 nova_compute[245707]: 2026-01-22 14:19:14.694 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:19:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:14.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:19:14 np0005592157 elated_yalow[275321]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:19:14 np0005592157 elated_yalow[275321]: --> relative data size: 1.0
Jan 22 09:19:14 np0005592157 elated_yalow[275321]: --> All data devices are unavailable
Jan 22 09:19:14 np0005592157 systemd[1]: libpod-e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b.scope: Deactivated successfully.
Jan 22 09:19:14 np0005592157 podman[275305]: 2026-01-22 14:19:14.871426137 +0000 UTC m=+1.262752579 container died e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:19:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-564250ca7fdf3c02d675329e8cae8d6ba77bb7104d8a694e25c982062cd2f8c9-merged.mount: Deactivated successfully.
Jan 22 09:19:14 np0005592157 podman[275305]: 2026-01-22 14:19:14.947965287 +0000 UTC m=+1.339291699 container remove e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:19:14 np0005592157 systemd[1]: libpod-conmon-e7ada4764caf04300ad0c16ad5bc39ed57bc2dd7acd7c697a4e7b2444e59582b.scope: Deactivated successfully.
Jan 22 09:19:15 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:15 np0005592157 podman[275489]: 2026-01-22 14:19:15.552363828 +0000 UTC m=+0.047515273 container create 2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:19:15 np0005592157 systemd[1]: Started libpod-conmon-2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c.scope.
Jan 22 09:19:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:19:15 np0005592157 podman[275489]: 2026-01-22 14:19:15.526177089 +0000 UTC m=+0.021328554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:19:15 np0005592157 podman[275489]: 2026-01-22 14:19:15.640252221 +0000 UTC m=+0.135403686 container init 2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:19:15 np0005592157 podman[275489]: 2026-01-22 14:19:15.650488507 +0000 UTC m=+0.145639952 container start 2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:19:15 np0005592157 podman[275489]: 2026-01-22 14:19:15.655108898 +0000 UTC m=+0.150260343 container attach 2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:19:15 np0005592157 awesome_kirch[275506]: 167 167
Jan 22 09:19:15 np0005592157 systemd[1]: libpod-2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c.scope: Deactivated successfully.
Jan 22 09:19:15 np0005592157 conmon[275506]: conmon 2b812ce54bf56a08c5dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c.scope/container/memory.events
Jan 22 09:19:15 np0005592157 podman[275489]: 2026-01-22 14:19:15.658773506 +0000 UTC m=+0.153924951 container died 2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:19:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-35bee534bb96693b2de2932841f2504454b231793242c8fb8dd5599ebeedfabe-merged.mount: Deactivated successfully.
Jan 22 09:19:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:15 np0005592157 podman[275489]: 2026-01-22 14:19:15.703083882 +0000 UTC m=+0.198235327 container remove 2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:19:15 np0005592157 podman[275503]: 2026-01-22 14:19:15.708773978 +0000 UTC m=+0.105386684 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:19:15 np0005592157 systemd[1]: libpod-conmon-2b812ce54bf56a08c5dc6e0dda0e5880aec0ccdb7ded77b8d29ec75d417f680c.scope: Deactivated successfully.
Jan 22 09:19:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:19:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:19:15 np0005592157 podman[275552]: 2026-01-22 14:19:15.880225381 +0000 UTC m=+0.047587676 container create 13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:19:15 np0005592157 systemd[1]: Started libpod-conmon-13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e.scope.
Jan 22 09:19:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:19:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88bd5e845f65609d851a4b6f7dc8764dda957a6acf40404221e41715a1c7015/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:15 np0005592157 podman[275552]: 2026-01-22 14:19:15.861014389 +0000 UTC m=+0.028376714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:19:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88bd5e845f65609d851a4b6f7dc8764dda957a6acf40404221e41715a1c7015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88bd5e845f65609d851a4b6f7dc8764dda957a6acf40404221e41715a1c7015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88bd5e845f65609d851a4b6f7dc8764dda957a6acf40404221e41715a1c7015/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:15 np0005592157 podman[275552]: 2026-01-22 14:19:15.965777037 +0000 UTC m=+0.133139332 container init 13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:19:15 np0005592157 podman[275552]: 2026-01-22 14:19:15.975252115 +0000 UTC m=+0.142614420 container start 13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 09:19:15 np0005592157 podman[275552]: 2026-01-22 14:19:15.979704102 +0000 UTC m=+0.147066397 container attach 13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:19:16 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:19:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:16.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]: {
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:    "0": [
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:        {
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "devices": [
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "/dev/loop3"
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            ],
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "lv_name": "ceph_lv0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "lv_size": "7511998464",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "name": "ceph_lv0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "tags": {
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.cluster_name": "ceph",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.crush_device_class": "",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.encrypted": "0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.osd_id": "0",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.type": "block",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:                "ceph.vdo": "0"
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            },
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "type": "block",
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:            "vg_name": "ceph_vg0"
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:        }
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]:    ]
Jan 22 09:19:16 np0005592157 agitated_satoshi[275568]: }
Jan 22 09:19:16 np0005592157 systemd[1]: libpod-13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e.scope: Deactivated successfully.
Jan 22 09:19:16 np0005592157 podman[275552]: 2026-01-22 14:19:16.840865765 +0000 UTC m=+1.008228060 container died 13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:19:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a88bd5e845f65609d851a4b6f7dc8764dda957a6acf40404221e41715a1c7015-merged.mount: Deactivated successfully.
Jan 22 09:19:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:16 np0005592157 podman[275552]: 2026-01-22 14:19:16.902682081 +0000 UTC m=+1.070044376 container remove 13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:19:16 np0005592157 systemd[1]: libpod-conmon-13ce097ca1d8e126dae995d080ea49f8eaa78559c9fd7483125a78f03a3f148e.scope: Deactivated successfully.
Jan 22 09:19:17 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:17 np0005592157 podman[275728]: 2026-01-22 14:19:17.586253976 +0000 UTC m=+0.053156949 container create e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 22 09:19:17 np0005592157 systemd[1]: Started libpod-conmon-e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600.scope.
Jan 22 09:19:17 np0005592157 podman[275728]: 2026-01-22 14:19:17.559036001 +0000 UTC m=+0.025938984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:19:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:19:17 np0005592157 podman[275728]: 2026-01-22 14:19:17.681343772 +0000 UTC m=+0.148246755 container init e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:19:17 np0005592157 podman[275728]: 2026-01-22 14:19:17.69124339 +0000 UTC m=+0.158146343 container start e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:19:17 np0005592157 podman[275728]: 2026-01-22 14:19:17.695790399 +0000 UTC m=+0.162693352 container attach e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shamir, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:19:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:17 np0005592157 sleepy_shamir[275744]: 167 167
Jan 22 09:19:17 np0005592157 systemd[1]: libpod-e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600.scope: Deactivated successfully.
Jan 22 09:19:17 np0005592157 podman[275728]: 2026-01-22 14:19:17.702764707 +0000 UTC m=+0.169667650 container died e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:19:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2823778b1a06be8b46a27140e7d03f5d8c16551a42073cd7d9b7b8f79b1a5d5d-merged.mount: Deactivated successfully.
Jan 22 09:19:17 np0005592157 podman[275728]: 2026-01-22 14:19:17.749790597 +0000 UTC m=+0.216693530 container remove e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 09:19:17 np0005592157 systemd[1]: libpod-conmon-e8b20aa508b6683969ebf7cc5277ad1e01e11902e6b7a1d2a96fc29f82204600.scope: Deactivated successfully.
Jan 22 09:19:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:17.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:17 np0005592157 podman[275768]: 2026-01-22 14:19:17.925905421 +0000 UTC m=+0.044367217 container create 21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:19:17 np0005592157 systemd[1]: Started libpod-conmon-21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da.scope.
Jan 22 09:19:18 np0005592157 podman[275768]: 2026-01-22 14:19:17.90798383 +0000 UTC m=+0.026445646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:19:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:19:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7d4c7e2e34f9a8b9cac40189e2beaa2f8139e1e76ceb350c90d075ef84c5c5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7d4c7e2e34f9a8b9cac40189e2beaa2f8139e1e76ceb350c90d075ef84c5c5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7d4c7e2e34f9a8b9cac40189e2beaa2f8139e1e76ceb350c90d075ef84c5c5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7d4c7e2e34f9a8b9cac40189e2beaa2f8139e1e76ceb350c90d075ef84c5c5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:19:18 np0005592157 podman[275768]: 2026-01-22 14:19:18.043850257 +0000 UTC m=+0.162312103 container init 21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 22 09:19:18 np0005592157 podman[275768]: 2026-01-22 14:19:18.057523676 +0000 UTC m=+0.175985462 container start 21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:19:18 np0005592157 podman[275768]: 2026-01-22 14:19:18.061993143 +0000 UTC m=+0.180454979 container attach 21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:19:18 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:18.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]: {
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]:        "osd_id": 0,
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]:        "type": "bluestore"
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]:    }
Jan 22 09:19:18 np0005592157 affectionate_leavitt[275784]: }
Jan 22 09:19:18 np0005592157 systemd[1]: libpod-21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da.scope: Deactivated successfully.
Jan 22 09:19:18 np0005592157 conmon[275784]: conmon 21fec9fcbf494648b27a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da.scope/container/memory.events
Jan 22 09:19:18 np0005592157 podman[275768]: 2026-01-22 14:19:18.968152839 +0000 UTC m=+1.086614635 container died 21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:19:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f7d4c7e2e34f9a8b9cac40189e2beaa2f8139e1e76ceb350c90d075ef84c5c5f-merged.mount: Deactivated successfully.
Jan 22 09:19:19 np0005592157 podman[275768]: 2026-01-22 14:19:19.100562892 +0000 UTC m=+1.219024688 container remove 21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 09:19:19 np0005592157 systemd[1]: libpod-conmon-21fec9fcbf494648b27a568539e00e307be39260a1c9418eee9839e36815e9da.scope: Deactivated successfully.
Jan 22 09:19:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:19:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:19:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fa9d5cd7-e852-44f1-94ca-c2ffc1f9e87a does not exist
Jan 22 09:19:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0b169d6e-2b88-4fc0-8a2d-8dfb66c0a66d does not exist
Jan 22 09:19:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 15cd427e-ceec-42d1-a9bc-09c773704903 does not exist
Jan 22 09:19:19 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592157 nova_compute[245707]: 2026-01-22 14:19:19.695 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:19.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:20 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:20.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:21.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:22 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:22.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:23 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:23 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:24 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:24 np0005592157 nova_compute[245707]: 2026-01-22 14:19:24.698 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:24 np0005592157 nova_compute[245707]: 2026-01-22 14:19:24.699 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:24 np0005592157 nova_compute[245707]: 2026-01-22 14:19:24.699 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:19:24 np0005592157 nova_compute[245707]: 2026-01-22 14:19:24.699 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:24 np0005592157 nova_compute[245707]: 2026-01-22 14:19:24.700 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:24 np0005592157 nova_compute[245707]: 2026-01-22 14:19:24.702 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:24.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:25 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:26 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:26.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:27 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:27 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:28 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:19:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:28.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:19:29 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:29 np0005592157 nova_compute[245707]: 2026-01-22 14:19:29.700 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:29 np0005592157 nova_compute[245707]: 2026-01-22 14:19:29.703 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:30 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:30.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:31 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:31.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:32.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:33 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:34 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:34 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:34 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:34 np0005592157 nova_compute[245707]: 2026-01-22 14:19:34.702 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:34 np0005592157 nova_compute[245707]: 2026-01-22 14:19:34.704 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:34.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:35 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:19:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:19:36 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:36.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:37 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:37 np0005592157 nova_compute[245707]: 2026-01-22 14:19:37.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:37 np0005592157 nova_compute[245707]: 2026-01-22 14:19:37.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:19:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:38 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:38.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:39 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:39 np0005592157 nova_compute[245707]: 2026-01-22 14:19:39.703 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:39 np0005592157 nova_compute[245707]: 2026-01-22 14:19:39.706 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:39.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:40 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:40.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:41 np0005592157 nova_compute[245707]: 2026-01-22 14:19:41.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:41 np0005592157 podman[275931]: 2026-01-22 14:19:41.3640855 +0000 UTC m=+0.091300666 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 22 09:19:41 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:41.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:42 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:42 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:42.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:43 np0005592157 nova_compute[245707]: 2026-01-22 14:19:43.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:43 np0005592157 nova_compute[245707]: 2026-01-22 14:19:43.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:43 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:43.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:44 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:44 np0005592157 nova_compute[245707]: 2026-01-22 14:19:44.708 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:44 np0005592157 nova_compute[245707]: 2026-01-22 14:19:44.710 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:44 np0005592157 nova_compute[245707]: 2026-01-22 14:19:44.710 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:19:44 np0005592157 nova_compute[245707]: 2026-01-22 14:19:44.710 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:44.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:44 np0005592157 nova_compute[245707]: 2026-01-22 14:19:44.754 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:44 np0005592157 nova_compute[245707]: 2026-01-22 14:19:44.755 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:19:45 np0005592157 nova_compute[245707]: 2026-01-22 14:19:45.240 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:45 np0005592157 nova_compute[245707]: 2026-01-22 14:19:45.261 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:45 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:45.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:46 np0005592157 podman[275960]: 2026-01-22 14:19:46.356282289 +0000 UTC m=+0.091251045 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 09:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:19:46 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:46.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.306 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.307 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.307 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.307 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:19:47 np0005592157 nova_compute[245707]: 2026-01-22 14:19:47.307 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:19:47
Jan 22 09:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control']
Jan 22 09:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:19:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:19:47.591 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:19:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:19:47.592 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:19:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:19:47.592 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:19:47 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:47 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:47.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:48 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:48.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:49 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:49 np0005592157 nova_compute[245707]: 2026-01-22 14:19:49.755 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:49.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:50 np0005592157 nova_compute[245707]: 2026-01-22 14:19:50.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:50 np0005592157 nova_compute[245707]: 2026-01-22 14:19:50.512 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:19:50 np0005592157 nova_compute[245707]: 2026-01-22 14:19:50.513 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:19:50 np0005592157 nova_compute[245707]: 2026-01-22 14:19:50.513 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:19:50 np0005592157 nova_compute[245707]: 2026-01-22 14:19:50.513 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:19:50 np0005592157 nova_compute[245707]: 2026-01-22 14:19:50.513 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:19:50 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:50.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:19:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314543867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:19:50 np0005592157 nova_compute[245707]: 2026-01-22 14:19:50.962 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.128 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.129 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4729MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.129 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.130 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.202 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.203 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.203 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.203 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.204 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.204 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.293 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:19:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:19:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2930567293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:19:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.728 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.734 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:19:51 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.763 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.765 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:19:51 np0005592157 nova_compute[245707]: 2026-01-22 14:19:51.766 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:19:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:51.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:52.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:52 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:52 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:53 np0005592157 nova_compute[245707]: 2026-01-22 14:19:53.766 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:53 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:53.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:54.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:54 np0005592157 nova_compute[245707]: 2026-01-22 14:19:54.808 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:54 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:55 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:55.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:56.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:56 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:57.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:58 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:58 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:19:59 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:19:59 np0005592157 nova_compute[245707]: 2026-01-22 14:19:59.811 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:19:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:19:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:19:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:59.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:00 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:01 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:01 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:01.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:02.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:02 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:02 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:04 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005357553101153918 of space, bias 1.0, pg target 1.6072659303461754 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:20:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:20:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:04.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:04 np0005592157 nova_compute[245707]: 2026-01-22 14:20:04.814 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:04 np0005592157 nova_compute[245707]: 2026-01-22 14:20:04.816 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:04 np0005592157 nova_compute[245707]: 2026-01-22 14:20:04.816 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:20:04 np0005592157 nova_compute[245707]: 2026-01-22 14:20:04.816 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:20:04 np0005592157 nova_compute[245707]: 2026-01-22 14:20:04.886 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:04 np0005592157 nova_compute[245707]: 2026-01-22 14:20:04.887 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:20:05 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:05.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:06 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:20:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:06.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:20:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:07.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:07 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:07 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:08.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:09 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:09.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:09 np0005592157 nova_compute[245707]: 2026-01-22 14:20:09.887 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:10 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:11.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:12 np0005592157 podman[276096]: 2026-01-22 14:20:12.315842151 +0000 UTC m=+0.050865794 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 09:20:12 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:12 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:12.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:13 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:13.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:14 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:14.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:14 np0005592157 nova_compute[245707]: 2026-01-22 14:20:14.889 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:14 np0005592157 nova_compute[245707]: 2026-01-22 14:20:14.891 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:14 np0005592157 nova_compute[245707]: 2026-01-22 14:20:14.891 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:20:14 np0005592157 nova_compute[245707]: 2026-01-22 14:20:14.891 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:20:14 np0005592157 nova_compute[245707]: 2026-01-22 14:20:14.928 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:14 np0005592157 nova_compute[245707]: 2026-01-22 14:20:14.929 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:20:15 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:15.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:16 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:20:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:16.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:17 np0005592157 podman[276168]: 2026-01-22 14:20:17.339812693 +0000 UTC m=+0.077933365 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Jan 22 09:20:17 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:17 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:17.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:20:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3670343237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:20:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:20:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3670343237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:20:18 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:18.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:19 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:19.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:19 np0005592157 nova_compute[245707]: 2026-01-22 14:20:19.930 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4cd66d92-f7cb-40b2-9ec3-30cdfbe0163e does not exist
Jan 22 09:20:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 663238ae-9078-4203-93a8-cd71d4b9a7e3 does not exist
Jan 22 09:20:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 588652ca-4b8a-42e8-9e69-a0d37f4d2c44 does not exist
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:20:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:20:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:20.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:21 np0005592157 podman[276469]: 2026-01-22 14:20:21.241984846 +0000 UTC m=+0.037548263 container create d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:20:21 np0005592157 systemd[1]: Started libpod-conmon-d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe.scope.
Jan 22 09:20:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:20:21 np0005592157 podman[276469]: 2026-01-22 14:20:21.224425944 +0000 UTC m=+0.019989391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:20:21 np0005592157 podman[276469]: 2026-01-22 14:20:21.339391738 +0000 UTC m=+0.134955165 container init d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:20:21 np0005592157 podman[276469]: 2026-01-22 14:20:21.349407719 +0000 UTC m=+0.144971136 container start d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:20:21 np0005592157 podman[276469]: 2026-01-22 14:20:21.353090417 +0000 UTC m=+0.148653844 container attach d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:20:21 np0005592157 distracted_dijkstra[276486]: 167 167
Jan 22 09:20:21 np0005592157 systemd[1]: libpod-d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe.scope: Deactivated successfully.
Jan 22 09:20:21 np0005592157 conmon[276486]: conmon d09ee936e9c0be9f33ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe.scope/container/memory.events
Jan 22 09:20:21 np0005592157 podman[276469]: 2026-01-22 14:20:21.358804105 +0000 UTC m=+0.154367522 container died d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:20:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-deed2bd3ba45dd93dc099fddf5a4855653732af58177f215acf5dfe010abcdce-merged.mount: Deactivated successfully.
Jan 22 09:20:21 np0005592157 podman[276469]: 2026-01-22 14:20:21.399340219 +0000 UTC m=+0.194903636 container remove d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:20:21 np0005592157 systemd[1]: libpod-conmon-d09ee936e9c0be9f33ed4b8c5623176a27af4b7ad0361894ea950d61087b16fe.scope: Deactivated successfully.
Jan 22 09:20:21 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:20:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:20:21 np0005592157 podman[276510]: 2026-01-22 14:20:21.570541655 +0000 UTC m=+0.056090589 container create 5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:20:21 np0005592157 systemd[1]: Started libpod-conmon-5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818.scope.
Jan 22 09:20:21 np0005592157 podman[276510]: 2026-01-22 14:20:21.541839155 +0000 UTC m=+0.027388189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:20:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:20:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dda9528d8df9ae28605afcbdb751ee160ca3eca16d0393d0c5de72290020ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dda9528d8df9ae28605afcbdb751ee160ca3eca16d0393d0c5de72290020ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dda9528d8df9ae28605afcbdb751ee160ca3eca16d0393d0c5de72290020ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dda9528d8df9ae28605afcbdb751ee160ca3eca16d0393d0c5de72290020ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dda9528d8df9ae28605afcbdb751ee160ca3eca16d0393d0c5de72290020ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:21 np0005592157 podman[276510]: 2026-01-22 14:20:21.67178842 +0000 UTC m=+0.157337344 container init 5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:20:21 np0005592157 podman[276510]: 2026-01-22 14:20:21.680204102 +0000 UTC m=+0.165753026 container start 5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:20:21 np0005592157 podman[276510]: 2026-01-22 14:20:21.683710816 +0000 UTC m=+0.169259750 container attach 5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:20:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:21.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:22 np0005592157 romantic_lamport[276526]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:20:22 np0005592157 romantic_lamport[276526]: --> relative data size: 1.0
Jan 22 09:20:22 np0005592157 romantic_lamport[276526]: --> All data devices are unavailable
Jan 22 09:20:22 np0005592157 systemd[1]: libpod-5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818.scope: Deactivated successfully.
Jan 22 09:20:22 np0005592157 podman[276510]: 2026-01-22 14:20:22.527247646 +0000 UTC m=+1.012796580 container died 5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:20:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-37dda9528d8df9ae28605afcbdb751ee160ca3eca16d0393d0c5de72290020ea-merged.mount: Deactivated successfully.
Jan 22 09:20:22 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:22 np0005592157 podman[276510]: 2026-01-22 14:20:22.596771038 +0000 UTC m=+1.082319972 container remove 5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:20:22 np0005592157 systemd[1]: libpod-conmon-5ad10c9eb517d8e21ccda30e481cb7dff2e604443db1cfcce97e6d116990b818.scope: Deactivated successfully.
Jan 22 09:20:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:22.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:23 np0005592157 podman[276694]: 2026-01-22 14:20:23.226992849 +0000 UTC m=+0.052891412 container create a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:20:23 np0005592157 systemd[1]: Started libpod-conmon-a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709.scope.
Jan 22 09:20:23 np0005592157 podman[276694]: 2026-01-22 14:20:23.199188971 +0000 UTC m=+0.025087534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:20:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:20:23 np0005592157 podman[276694]: 2026-01-22 14:20:23.3235181 +0000 UTC m=+0.149416663 container init a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:20:23 np0005592157 podman[276694]: 2026-01-22 14:20:23.331614535 +0000 UTC m=+0.157513078 container start a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:20:23 np0005592157 podman[276694]: 2026-01-22 14:20:23.335717583 +0000 UTC m=+0.161616126 container attach a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:20:23 np0005592157 kind_boyd[276710]: 167 167
Jan 22 09:20:23 np0005592157 systemd[1]: libpod-a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709.scope: Deactivated successfully.
Jan 22 09:20:23 np0005592157 conmon[276710]: conmon a2e8a0f9e1559e29d471 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709.scope/container/memory.events
Jan 22 09:20:23 np0005592157 podman[276694]: 2026-01-22 14:20:23.339182816 +0000 UTC m=+0.165081359 container died a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_boyd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:20:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-788032cd30d9919f650cf8cbc88ae4f224a7ec8909c7e1f2bfd0032c5f42eb5e-merged.mount: Deactivated successfully.
Jan 22 09:20:23 np0005592157 podman[276694]: 2026-01-22 14:20:23.382498088 +0000 UTC m=+0.208396631 container remove a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_boyd, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:20:23 np0005592157 systemd[1]: libpod-conmon-a2e8a0f9e1559e29d47190bbd1c7442130447621363362983cdaacda704ab709.scope: Deactivated successfully.
Jan 22 09:20:23 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:23 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:23 np0005592157 podman[276737]: 2026-01-22 14:20:23.593351157 +0000 UTC m=+0.068242842 container create 5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:20:23 np0005592157 systemd[1]: Started libpod-conmon-5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f.scope.
Jan 22 09:20:23 np0005592157 podman[276737]: 2026-01-22 14:20:23.569833002 +0000 UTC m=+0.044724737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:20:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:20:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e61eea51c79d72f2111a8b7c6a41622eaaa60a8c1b8efa391bafcfa88abc8da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e61eea51c79d72f2111a8b7c6a41622eaaa60a8c1b8efa391bafcfa88abc8da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e61eea51c79d72f2111a8b7c6a41622eaaa60a8c1b8efa391bafcfa88abc8da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e61eea51c79d72f2111a8b7c6a41622eaaa60a8c1b8efa391bafcfa88abc8da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:23 np0005592157 podman[276737]: 2026-01-22 14:20:23.7049609 +0000 UTC m=+0.179852605 container init 5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:20:23 np0005592157 podman[276737]: 2026-01-22 14:20:23.7136904 +0000 UTC m=+0.188582085 container start 5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:20:23 np0005592157 podman[276737]: 2026-01-22 14:20:23.71825054 +0000 UTC m=+0.193142225 container attach 5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:20:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:23.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:24 np0005592157 focused_davinci[276754]: {
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:    "0": [
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:        {
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "devices": [
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "/dev/loop3"
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            ],
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "lv_name": "ceph_lv0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "lv_size": "7511998464",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "name": "ceph_lv0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "tags": {
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.cluster_name": "ceph",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.crush_device_class": "",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.encrypted": "0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.osd_id": "0",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.type": "block",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:                "ceph.vdo": "0"
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            },
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "type": "block",
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:            "vg_name": "ceph_vg0"
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:        }
Jan 22 09:20:24 np0005592157 focused_davinci[276754]:    ]
Jan 22 09:20:24 np0005592157 focused_davinci[276754]: }
Jan 22 09:20:24 np0005592157 systemd[1]: libpod-5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f.scope: Deactivated successfully.
Jan 22 09:20:24 np0005592157 podman[276737]: 2026-01-22 14:20:24.556057201 +0000 UTC m=+1.030948956 container died 5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:20:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6e61eea51c79d72f2111a8b7c6a41622eaaa60a8c1b8efa391bafcfa88abc8da-merged.mount: Deactivated successfully.
Jan 22 09:20:24 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:24 np0005592157 podman[276737]: 2026-01-22 14:20:24.635744907 +0000 UTC m=+1.110636602 container remove 5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:20:24 np0005592157 systemd[1]: libpod-conmon-5224e1df9385faa598a5a85de0e2423329aab87b6c9ad53b2862d3858a175a1f.scope: Deactivated successfully.
Jan 22 09:20:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:24.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:24 np0005592157 nova_compute[245707]: 2026-01-22 14:20:24.932 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:25 np0005592157 podman[276913]: 2026-01-22 14:20:25.357685084 +0000 UTC m=+0.087681149 container create cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:20:25 np0005592157 podman[276913]: 2026-01-22 14:20:25.297484546 +0000 UTC m=+0.027480701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:20:25 np0005592157 systemd[1]: Started libpod-conmon-cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a.scope.
Jan 22 09:20:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:20:25 np0005592157 podman[276913]: 2026-01-22 14:20:25.451086079 +0000 UTC m=+0.181082164 container init cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:20:25 np0005592157 podman[276913]: 2026-01-22 14:20:25.458556219 +0000 UTC m=+0.188552284 container start cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:20:25 np0005592157 heuristic_visvesvaraya[276929]: 167 167
Jan 22 09:20:25 np0005592157 podman[276913]: 2026-01-22 14:20:25.46403419 +0000 UTC m=+0.194030265 container attach cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:20:25 np0005592157 systemd[1]: libpod-cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a.scope: Deactivated successfully.
Jan 22 09:20:25 np0005592157 conmon[276929]: conmon cb9e685b5092c64bedab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a.scope/container/memory.events
Jan 22 09:20:25 np0005592157 podman[276913]: 2026-01-22 14:20:25.467069903 +0000 UTC m=+0.197065998 container died cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:20:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-de14df6f54f9e8c07e31c3005754b621de7a7ec1d7d8c1adadbf8dafe077de5d-merged.mount: Deactivated successfully.
Jan 22 09:20:25 np0005592157 podman[276913]: 2026-01-22 14:20:25.522015194 +0000 UTC m=+0.252011259 container remove cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:20:25 np0005592157 systemd[1]: libpod-conmon-cb9e685b5092c64bedabe8aef7763fa417872c292c1dccb71984389ea0aa2b6a.scope: Deactivated successfully.
Jan 22 09:20:25 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:25 np0005592157 podman[276955]: 2026-01-22 14:20:25.676652782 +0000 UTC m=+0.040921005 container create 0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:20:25 np0005592157 systemd[1]: Started libpod-conmon-0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937.scope.
Jan 22 09:20:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:20:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfa565037e23cc791b014d93ac0453a176b9166f354289774fcbe16eebc05be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfa565037e23cc791b014d93ac0453a176b9166f354289774fcbe16eebc05be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfa565037e23cc791b014d93ac0453a176b9166f354289774fcbe16eebc05be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:25 np0005592157 podman[276955]: 2026-01-22 14:20:25.659875239 +0000 UTC m=+0.024143482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:20:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfa565037e23cc791b014d93ac0453a176b9166f354289774fcbe16eebc05be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:20:25 np0005592157 podman[276955]: 2026-01-22 14:20:25.769616357 +0000 UTC m=+0.133884590 container init 0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_borg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:20:25 np0005592157 podman[276955]: 2026-01-22 14:20:25.779590177 +0000 UTC m=+0.143858410 container start 0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_borg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:20:25 np0005592157 podman[276955]: 2026-01-22 14:20:25.783212484 +0000 UTC m=+0.147480727 container attach 0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_borg, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:20:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:25.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:26 np0005592157 adoring_borg[276971]: {
Jan 22 09:20:26 np0005592157 adoring_borg[276971]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:20:26 np0005592157 adoring_borg[276971]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:20:26 np0005592157 adoring_borg[276971]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:20:26 np0005592157 adoring_borg[276971]:        "osd_id": 0,
Jan 22 09:20:26 np0005592157 adoring_borg[276971]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:20:26 np0005592157 adoring_borg[276971]:        "type": "bluestore"
Jan 22 09:20:26 np0005592157 adoring_borg[276971]:    }
Jan 22 09:20:26 np0005592157 adoring_borg[276971]: }
Jan 22 09:20:26 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:26 np0005592157 systemd[1]: libpod-0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937.scope: Deactivated successfully.
Jan 22 09:20:26 np0005592157 podman[276955]: 2026-01-22 14:20:26.675154838 +0000 UTC m=+1.039423071 container died 0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_borg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:20:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3dfa565037e23cc791b014d93ac0453a176b9166f354289774fcbe16eebc05be-merged.mount: Deactivated successfully.
Jan 22 09:20:26 np0005592157 podman[276955]: 2026-01-22 14:20:26.741293688 +0000 UTC m=+1.105561911 container remove 0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_borg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:20:26 np0005592157 systemd[1]: libpod-conmon-0baf62fd091425600eb164a4e4acab561b8a185b6142db2933bd55a3bba47937.scope: Deactivated successfully.
Jan 22 09:20:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:20:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:20:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:20:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:26.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:20:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d64dc686-0f1c-4c73-a9dc-7df9e40e77aa does not exist
Jan 22 09:20:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 357cc313-dc90-49e0-aa06-527a18ddce33 does not exist
Jan 22 09:20:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e9a3ffc3-f619-492b-a04c-fe8470ca1357 does not exist
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:27.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.912591) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627912696, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1814, "num_deletes": 251, "total_data_size": 2538915, "memory_usage": 2582320, "flush_reason": "Manual Compaction"}
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627931127, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2486047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41086, "largest_seqno": 42899, "table_properties": {"data_size": 2478446, "index_size": 4159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19604, "raw_average_key_size": 21, "raw_value_size": 2461643, "raw_average_value_size": 2664, "num_data_blocks": 180, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091492, "oldest_key_time": 1769091492, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 18833 microseconds, and 7872 cpu microseconds.
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.931430) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2486047 bytes OK
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.931521) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.933970) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.934028) EVENT_LOG_v1 {"time_micros": 1769091627934013, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.934068) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2531090, prev total WAL file size 2546828, number of live WAL files 2.
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2427KB)], [89(8854KB)]
Jan 22 09:20:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627936371, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11553533, "oldest_snapshot_seqno": -1}
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 8650 keys, 9903565 bytes, temperature: kUnknown
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628012847, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9903565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9853031, "index_size": 27830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 231936, "raw_average_key_size": 26, "raw_value_size": 9702283, "raw_average_value_size": 1121, "num_data_blocks": 1064, "num_entries": 8650, "num_filter_entries": 8650, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.013430) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9903565 bytes
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.016551) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.5 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 9165, records dropped: 515 output_compression: NoCompression
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.016571) EVENT_LOG_v1 {"time_micros": 1769091628016561, "job": 52, "event": "compaction_finished", "compaction_time_micros": 76743, "compaction_time_cpu_micros": 27620, "output_level": 6, "num_output_files": 1, "total_output_size": 9903565, "num_input_records": 9165, "num_output_records": 8650, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628017548, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628019938, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.020118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.020127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.020129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.020130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:20:28.020132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:28.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:28 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:29.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:29 np0005592157 nova_compute[245707]: 2026-01-22 14:20:29.934 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:30 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:30.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:31 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:20:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:31.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:20:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:32 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:32 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:32.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:33 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:33.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:34 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:34.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:34 np0005592157 nova_compute[245707]: 2026-01-22 14:20:34.938 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:34 np0005592157 nova_compute[245707]: 2026-01-22 14:20:34.940 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:34 np0005592157 nova_compute[245707]: 2026-01-22 14:20:34.940 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:20:34 np0005592157 nova_compute[245707]: 2026-01-22 14:20:34.940 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:20:34 np0005592157 nova_compute[245707]: 2026-01-22 14:20:34.973 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:34 np0005592157 nova_compute[245707]: 2026-01-22 14:20:34.974 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:20:35 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:35.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:36 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:36.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:37 np0005592157 nova_compute[245707]: 2026-01-22 14:20:37.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:37 np0005592157 nova_compute[245707]: 2026-01-22 14:20:37.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:20:37 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:37 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:37.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:38 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:38.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:39 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:39.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:39 np0005592157 nova_compute[245707]: 2026-01-22 14:20:39.974 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:20:40 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:40.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:41 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:41.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:42 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:20:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:42.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:20:43 np0005592157 nova_compute[245707]: 2026-01-22 14:20:43.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:43 np0005592157 podman[277113]: 2026-01-22 14:20:43.385739646 +0000 UTC m=+0.099625297 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:20:43 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:43 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:43.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:44 np0005592157 nova_compute[245707]: 2026-01-22 14:20:44.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:44 np0005592157 nova_compute[245707]: 2026-01-22 14:20:44.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:44 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:44.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:44 np0005592157 nova_compute[245707]: 2026-01-22 14:20:44.976 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:45 np0005592157 nova_compute[245707]: 2026-01-22 14:20:45.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:45 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:45.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:46 np0005592157 nova_compute[245707]: 2026-01-22 14:20:46.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:46 np0005592157 nova_compute[245707]: 2026-01-22 14:20:46.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:20:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.334 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.335 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.335 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.355 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.355 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.355 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.355 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:20:47 np0005592157 nova_compute[245707]: 2026-01-22 14:20:47.355 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:20:47
Jan 22 09:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'volumes', '.rgw.root', 'images']
Jan 22 09:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:20:47 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:20:47.593 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:20:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:20:47.594 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:20:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:20:47.595 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:20:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:47.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:48 np0005592157 nova_compute[245707]: 2026-01-22 14:20:48.260 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:48 np0005592157 podman[277136]: 2026-01-22 14:20:48.380851198 +0000 UTC m=+0.113549071 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:20:48 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:48 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:48.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:49.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:49 np0005592157 nova_compute[245707]: 2026-01-22 14:20:49.978 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:50 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:50.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:51 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:51.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.276 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.277 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.277 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.277 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.277 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:20:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:20:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954756605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:20:52 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:52 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.734 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:20:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:52.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.889 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.890 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4757MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.890 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:20:52 np0005592157 nova_compute[245707]: 2026-01-22 14:20:52.891 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.196 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.196 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.196 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.197 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.197 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.197 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.257 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing inventories for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.305 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating ProviderTree inventory for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.306 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating inventory in ProviderTree for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.327 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing aggregate associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.349 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing trait associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.447 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:20:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:53 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:20:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394538097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.881 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.888 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:20:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:53.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.994 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.996 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:20:53 np0005592157 nova_compute[245707]: 2026-01-22 14:20:53.997 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:20:54 np0005592157 nova_compute[245707]: 2026-01-22 14:20:54.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:54 np0005592157 nova_compute[245707]: 2026-01-22 14:20:54.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:54 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:54.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:54 np0005592157 nova_compute[245707]: 2026-01-22 14:20:54.980 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:55 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:56 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:56.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:57 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:57 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:58 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:20:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:20:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:59 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:00 np0005592157 nova_compute[245707]: 2026-01-22 14:20:59.984 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:00 np0005592157 nova_compute[245707]: 2026-01-22 14:20:59.986 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:00 np0005592157 nova_compute[245707]: 2026-01-22 14:20:59.986 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:21:00 np0005592157 nova_compute[245707]: 2026-01-22 14:20:59.986 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:00 np0005592157 nova_compute[245707]: 2026-01-22 14:21:00.018 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:00 np0005592157 nova_compute[245707]: 2026-01-22 14:21:00.019 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:01 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:02 np0005592157 nova_compute[245707]: 2026-01-22 14:21:02.373 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:02 np0005592157 nova_compute[245707]: 2026-01-22 14:21:02.374 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:21:02 np0005592157 nova_compute[245707]: 2026-01-22 14:21:02.392 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:21:02 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:03 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:03 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:03.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005357553101153918 of space, bias 1.0, pg target 1.6072659303461754 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:21:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 22 09:21:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:04.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:04 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:04 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:05 np0005592157 nova_compute[245707]: 2026-01-22 14:21:05.019 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:05 np0005592157 nova_compute[245707]: 2026-01-22 14:21:05.020 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:05.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:05 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:06.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:07 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:07.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:08 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:08 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:08.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:09 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:09.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:10 np0005592157 nova_compute[245707]: 2026-01-22 14:21:10.021 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:10 np0005592157 nova_compute[245707]: 2026-01-22 14:21:10.022 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:10 np0005592157 nova_compute[245707]: 2026-01-22 14:21:10.022 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:21:10 np0005592157 nova_compute[245707]: 2026-01-22 14:21:10.022 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:10 np0005592157 nova_compute[245707]: 2026-01-22 14:21:10.023 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:10 np0005592157 nova_compute[245707]: 2026-01-22 14:21:10.024 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:10 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:10.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:11 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:12 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:13 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:13 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:14 np0005592157 podman[277272]: 2026-01-22 14:21:14.326725151 +0000 UTC m=+0.053815505 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:21:14 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:14.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:15 np0005592157 nova_compute[245707]: 2026-01-22 14:21:15.025 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:15 np0005592157 nova_compute[245707]: 2026-01-22 14:21:15.026 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:15 np0005592157 nova_compute[245707]: 2026-01-22 14:21:15.027 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:21:15 np0005592157 nova_compute[245707]: 2026-01-22 14:21:15.027 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:15 np0005592157 nova_compute[245707]: 2026-01-22 14:21:15.061 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:15 np0005592157 nova_compute[245707]: 2026-01-22 14:21:15.061 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:15 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:15.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:16 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:21:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:16.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:17 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:17.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:21:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4133897823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:21:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:21:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4133897823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:21:18 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:19 np0005592157 podman[277344]: 2026-01-22 14:21:19.347980599 +0000 UTC m=+0.086566602 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:21:19 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:20 np0005592157 nova_compute[245707]: 2026-01-22 14:21:20.063 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:20.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:21 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:21.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:22 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:22 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:23 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:23 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:24 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:24.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:25 np0005592157 nova_compute[245707]: 2026-01-22 14:21:25.065 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:25 np0005592157 nova_compute[245707]: 2026-01-22 14:21:25.067 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:25 np0005592157 nova_compute[245707]: 2026-01-22 14:21:25.067 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:21:25 np0005592157 nova_compute[245707]: 2026-01-22 14:21:25.067 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:25 np0005592157 nova_compute[245707]: 2026-01-22 14:21:25.119 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:25 np0005592157 nova_compute[245707]: 2026-01-22 14:21:25.120 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:25 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:26.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:26 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:27 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:27 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dc921c6f-7542-4126-ab6d-ccc174dc0080 does not exist
Jan 22 09:21:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8a093d56-d09a-4cd4-9d8e-dc05d412917b does not exist
Jan 22 09:21:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0f43fd3d-8ef7-42b6-9195-3c5184bb8b79 does not exist
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:21:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:21:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:28 np0005592157 podman[277646]: 2026-01-22 14:21:28.921735236 +0000 UTC m=+0.036158251 container create f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:21:28 np0005592157 systemd[1]: Started libpod-conmon-f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80.scope.
Jan 22 09:21:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:21:28 np0005592157 podman[277646]: 2026-01-22 14:21:28.997715642 +0000 UTC m=+0.112138667 container init f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:21:29 np0005592157 podman[277646]: 2026-01-22 14:21:28.906409067 +0000 UTC m=+0.020832112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:21:29 np0005592157 podman[277646]: 2026-01-22 14:21:29.004340302 +0000 UTC m=+0.118763327 container start f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:21:29 np0005592157 podman[277646]: 2026-01-22 14:21:29.007492317 +0000 UTC m=+0.121915332 container attach f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:21:29 np0005592157 thirsty_mcclintock[277662]: 167 167
Jan 22 09:21:29 np0005592157 systemd[1]: libpod-f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80.scope: Deactivated successfully.
Jan 22 09:21:29 np0005592157 conmon[277662]: conmon f93ba5b2ad6ac5d406f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80.scope/container/memory.events
Jan 22 09:21:29 np0005592157 podman[277646]: 2026-01-22 14:21:29.011228247 +0000 UTC m=+0.125651262 container died f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:21:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d5c7315ce3a5df40ccf595e514a7595b977fe6b20fca3ececf60b2a00f8cedf8-merged.mount: Deactivated successfully.
Jan 22 09:21:29 np0005592157 podman[277646]: 2026-01-22 14:21:29.057756786 +0000 UTC m=+0.172179801 container remove f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:21:29 np0005592157 systemd[1]: libpod-conmon-f93ba5b2ad6ac5d406f9a214f91cdaa647bc9b2393085a3aff10ab3bc17afe80.scope: Deactivated successfully.
Jan 22 09:21:29 np0005592157 podman[277686]: 2026-01-22 14:21:29.213391118 +0000 UTC m=+0.040015223 container create 2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:21:29 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:21:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:21:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:21:29 np0005592157 systemd[1]: Started libpod-conmon-2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686.scope.
Jan 22 09:21:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:21:29 np0005592157 podman[277686]: 2026-01-22 14:21:29.196749607 +0000 UTC m=+0.023373692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:21:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572a9b4281547fb677593be3665973fbee132fa5cdf5e21cb0a2c0a71e65f1fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572a9b4281547fb677593be3665973fbee132fa5cdf5e21cb0a2c0a71e65f1fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572a9b4281547fb677593be3665973fbee132fa5cdf5e21cb0a2c0a71e65f1fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572a9b4281547fb677593be3665973fbee132fa5cdf5e21cb0a2c0a71e65f1fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572a9b4281547fb677593be3665973fbee132fa5cdf5e21cb0a2c0a71e65f1fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:29 np0005592157 podman[277686]: 2026-01-22 14:21:29.310032381 +0000 UTC m=+0.136656476 container init 2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_snyder, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:21:29 np0005592157 podman[277686]: 2026-01-22 14:21:29.316563988 +0000 UTC m=+0.143188053 container start 2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:21:29 np0005592157 podman[277686]: 2026-01-22 14:21:29.320144374 +0000 UTC m=+0.146768439 container attach 2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:21:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:30.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:30 np0005592157 tender_snyder[277702]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:21:30 np0005592157 tender_snyder[277702]: --> relative data size: 1.0
Jan 22 09:21:30 np0005592157 tender_snyder[277702]: --> All data devices are unavailable
Jan 22 09:21:30 np0005592157 nova_compute[245707]: 2026-01-22 14:21:30.121 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:30 np0005592157 nova_compute[245707]: 2026-01-22 14:21:30.125 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:30 np0005592157 systemd[1]: libpod-2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686.scope: Deactivated successfully.
Jan 22 09:21:30 np0005592157 podman[277718]: 2026-01-22 14:21:30.191889032 +0000 UTC m=+0.024821028 container died 2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:21:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-572a9b4281547fb677593be3665973fbee132fa5cdf5e21cb0a2c0a71e65f1fc-merged.mount: Deactivated successfully.
Jan 22 09:21:30 np0005592157 podman[277718]: 2026-01-22 14:21:30.244998399 +0000 UTC m=+0.077930375 container remove 2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 09:21:30 np0005592157 systemd[1]: libpod-conmon-2af77b03074e2b79e75d1a399d4f8b8f9f79f08097fcb01142779b3a357f5686.scope: Deactivated successfully.
Jan 22 09:21:30 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:30 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:30 np0005592157 podman[277873]: 2026-01-22 14:21:30.895606981 +0000 UTC m=+0.038904647 container create 265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kirch, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:21:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:30.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:30 np0005592157 systemd[1]: Started libpod-conmon-265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340.scope.
Jan 22 09:21:30 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:21:30 np0005592157 podman[277873]: 2026-01-22 14:21:30.880296743 +0000 UTC m=+0.023594429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:21:30 np0005592157 podman[277873]: 2026-01-22 14:21:30.979459027 +0000 UTC m=+0.122756783 container init 265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:21:30 np0005592157 podman[277873]: 2026-01-22 14:21:30.989139749 +0000 UTC m=+0.132437405 container start 265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kirch, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:21:30 np0005592157 podman[277873]: 2026-01-22 14:21:30.992574102 +0000 UTC m=+0.135871768 container attach 265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:21:30 np0005592157 unruffled_kirch[277889]: 167 167
Jan 22 09:21:30 np0005592157 systemd[1]: libpod-265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340.scope: Deactivated successfully.
Jan 22 09:21:30 np0005592157 podman[277873]: 2026-01-22 14:21:30.99458448 +0000 UTC m=+0.137882146 container died 265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kirch, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:21:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4e5d6aff2f4284ab30a969a6278b8e17e13ce7436b896695e12489b36b6e3fc0-merged.mount: Deactivated successfully.
Jan 22 09:21:31 np0005592157 podman[277873]: 2026-01-22 14:21:31.038450275 +0000 UTC m=+0.181747941 container remove 265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:21:31 np0005592157 systemd[1]: libpod-conmon-265033cd3784a53b581a0bfdd4617d2d8a625a9122fc988ba652389351658340.scope: Deactivated successfully.
Jan 22 09:21:31 np0005592157 podman[277912]: 2026-01-22 14:21:31.221226829 +0000 UTC m=+0.040771751 container create f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:21:31 np0005592157 systemd[1]: Started libpod-conmon-f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5.scope.
Jan 22 09:21:31 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:21:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91c279e069c90e67793cb8aa1df5e5701a1d796781cc84a856235909d221ae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:31 np0005592157 podman[277912]: 2026-01-22 14:21:31.203719808 +0000 UTC m=+0.023264750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:21:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91c279e069c90e67793cb8aa1df5e5701a1d796781cc84a856235909d221ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91c279e069c90e67793cb8aa1df5e5701a1d796781cc84a856235909d221ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91c279e069c90e67793cb8aa1df5e5701a1d796781cc84a856235909d221ae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:31 np0005592157 podman[277912]: 2026-01-22 14:21:31.316177742 +0000 UTC m=+0.135722694 container init f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:21:31 np0005592157 podman[277912]: 2026-01-22 14:21:31.324340948 +0000 UTC m=+0.143885870 container start f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:21:31 np0005592157 podman[277912]: 2026-01-22 14:21:31.327619347 +0000 UTC m=+0.147164269 container attach f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:21:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:32.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:32 np0005592157 nice_greider[277929]: {
Jan 22 09:21:32 np0005592157 nice_greider[277929]:    "0": [
Jan 22 09:21:32 np0005592157 nice_greider[277929]:        {
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "devices": [
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "/dev/loop3"
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            ],
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "lv_name": "ceph_lv0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "lv_size": "7511998464",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "name": "ceph_lv0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "tags": {
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.cluster_name": "ceph",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.crush_device_class": "",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.encrypted": "0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.osd_id": "0",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.type": "block",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:                "ceph.vdo": "0"
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            },
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "type": "block",
Jan 22 09:21:32 np0005592157 nice_greider[277929]:            "vg_name": "ceph_vg0"
Jan 22 09:21:32 np0005592157 nice_greider[277929]:        }
Jan 22 09:21:32 np0005592157 nice_greider[277929]:    ]
Jan 22 09:21:32 np0005592157 nice_greider[277929]: }
Jan 22 09:21:32 np0005592157 systemd[1]: libpod-f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5.scope: Deactivated successfully.
Jan 22 09:21:32 np0005592157 podman[277912]: 2026-01-22 14:21:32.145158361 +0000 UTC m=+0.964703283 container died f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:21:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f91c279e069c90e67793cb8aa1df5e5701a1d796781cc84a856235909d221ae3-merged.mount: Deactivated successfully.
Jan 22 09:21:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:32 np0005592157 podman[277912]: 2026-01-22 14:21:32.20376928 +0000 UTC m=+1.023314202 container remove f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:21:32 np0005592157 systemd[1]: libpod-conmon-f1f9ce2f3b8373af0d1b9c6f68c7aea12cab9a3e171dc2bb7a0dbcb7470b24d5.scope: Deactivated successfully.
Jan 22 09:21:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:32 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:32 np0005592157 podman[278093]: 2026-01-22 14:21:32.880052839 +0000 UTC m=+0.055036154 container create cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:21:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:32 np0005592157 systemd[1]: Started libpod-conmon-cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252.scope.
Jan 22 09:21:32 np0005592157 podman[278093]: 2026-01-22 14:21:32.852861875 +0000 UTC m=+0.027845270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:21:32 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:21:32 np0005592157 podman[278093]: 2026-01-22 14:21:32.972649215 +0000 UTC m=+0.147632550 container init cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_einstein, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:21:32 np0005592157 podman[278093]: 2026-01-22 14:21:32.983819054 +0000 UTC m=+0.158802359 container start cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:21:32 np0005592157 epic_einstein[278109]: 167 167
Jan 22 09:21:32 np0005592157 systemd[1]: libpod-cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252.scope: Deactivated successfully.
Jan 22 09:21:32 np0005592157 podman[278093]: 2026-01-22 14:21:32.98864776 +0000 UTC m=+0.163631085 container attach cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 09:21:32 np0005592157 podman[278093]: 2026-01-22 14:21:32.989179683 +0000 UTC m=+0.164162978 container died cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 09:21:33 np0005592157 systemd[1]: var-lib-containers-storage-overlay-460fdec0072072910d81d3733f6a0e8f0da107650c198a7b166688605910191d-merged.mount: Deactivated successfully.
Jan 22 09:21:33 np0005592157 podman[278093]: 2026-01-22 14:21:33.032265178 +0000 UTC m=+0.207248473 container remove cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:21:33 np0005592157 systemd[1]: libpod-conmon-cc5e1f6b1eaa8070e5d8fffde49e84d8c7ab2f4c463a9cbc950e28a5f0793252.scope: Deactivated successfully.
Jan 22 09:21:33 np0005592157 podman[278134]: 2026-01-22 14:21:33.196535458 +0000 UTC m=+0.040068825 container create 89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:21:33 np0005592157 systemd[1]: Started libpod-conmon-89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d.scope.
Jan 22 09:21:33 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:21:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90dea24433c0334b08c6d1a4d21c8112e80bb206a9b8634d97a54004ff5a325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90dea24433c0334b08c6d1a4d21c8112e80bb206a9b8634d97a54004ff5a325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90dea24433c0334b08c6d1a4d21c8112e80bb206a9b8634d97a54004ff5a325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90dea24433c0334b08c6d1a4d21c8112e80bb206a9b8634d97a54004ff5a325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:21:33 np0005592157 podman[278134]: 2026-01-22 14:21:33.273355954 +0000 UTC m=+0.116889381 container init 89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:21:33 np0005592157 podman[278134]: 2026-01-22 14:21:33.179390805 +0000 UTC m=+0.022924192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:21:33 np0005592157 podman[278134]: 2026-01-22 14:21:33.284226346 +0000 UTC m=+0.127759703 container start 89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:21:33 np0005592157 podman[278134]: 2026-01-22 14:21:33.287968096 +0000 UTC m=+0.131501453 container attach 89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:21:33 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:33 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:34.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]: {
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]:        "osd_id": 0,
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]:        "type": "bluestore"
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]:    }
Jan 22 09:21:34 np0005592157 intelligent_cerf[278150]: }
Jan 22 09:21:34 np0005592157 systemd[1]: libpod-89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d.scope: Deactivated successfully.
Jan 22 09:21:34 np0005592157 podman[278134]: 2026-01-22 14:21:34.160591194 +0000 UTC m=+1.004124541 container died 89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:21:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c90dea24433c0334b08c6d1a4d21c8112e80bb206a9b8634d97a54004ff5a325-merged.mount: Deactivated successfully.
Jan 22 09:21:34 np0005592157 podman[278134]: 2026-01-22 14:21:34.216324014 +0000 UTC m=+1.059857361 container remove 89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:21:34 np0005592157 systemd[1]: libpod-conmon-89331a327e255ae9930eaaa96d25961bceaa2e51970fb47e0246552e9f0ba06d.scope: Deactivated successfully.
Jan 22 09:21:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:21:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:21:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bda812ae-3fd5-4e1a-a03e-2cc019efa1e9 does not exist
Jan 22 09:21:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e45bb4ad-b2d8-41db-abd8-9515e1c51b56 does not exist
Jan 22 09:21:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b8baa21c-c43c-4e9e-9f7a-baa007cbadfa does not exist
Jan 22 09:21:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:34.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:35 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:35 np0005592157 nova_compute[245707]: 2026-01-22 14:21:35.125 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:36.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:36 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:36 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:21:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:36.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:21:37 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:37 np0005592157 nova_compute[245707]: 2026-01-22 14:21:37.264 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:37 np0005592157 nova_compute[245707]: 2026-01-22 14:21:37.264 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:21:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:38.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:38 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:38.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:39 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:40.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:40 np0005592157 nova_compute[245707]: 2026-01-22 14:21:40.128 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:40 np0005592157 nova_compute[245707]: 2026-01-22 14:21:40.130 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:21:40 np0005592157 nova_compute[245707]: 2026-01-22 14:21:40.130 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:21:40 np0005592157 nova_compute[245707]: 2026-01-22 14:21:40.131 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:40 np0005592157 nova_compute[245707]: 2026-01-22 14:21:40.162 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:40 np0005592157 nova_compute[245707]: 2026-01-22 14:21:40.163 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:21:40 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:21:40.698 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:21:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:21:40.699 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:21:40 np0005592157 nova_compute[245707]: 2026-01-22 14:21:40.699 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:40 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:21:40.700 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:21:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:40.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:41 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:42.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:42 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:42.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:43 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:43 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:44 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:44.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:45 np0005592157 podman[278292]: 2026-01-22 14:21:45.11469773 +0000 UTC m=+0.063427356 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:21:45 np0005592157 nova_compute[245707]: 2026-01-22 14:21:45.163 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:45 np0005592157 nova_compute[245707]: 2026-01-22 14:21:45.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:45 np0005592157 nova_compute[245707]: 2026-01-22 14:21:45.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:45 np0005592157 nova_compute[245707]: 2026-01-22 14:21:45.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:45 np0005592157 nova_compute[245707]: 2026-01-22 14:21:45.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:45 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:21:46 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.241 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 2697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.268 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.268 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.268 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.295 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.296 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.296 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.296 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:47 np0005592157 nova_compute[245707]: 2026-01-22 14:21:47.296 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:21:47
Jan 22 09:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 22 09:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:21:47 np0005592157 ceph-mon[74359]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:47 np0005592157 ceph-mon[74359]: Health check update: 22 slow ops, oldest one blocked for 2697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:21:47.594 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:21:47.594 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:21:47.594 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:48 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:49 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:50.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:50 np0005592157 nova_compute[245707]: 2026-01-22 14:21:50.164 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:50 np0005592157 nova_compute[245707]: 2026-01-22 14:21:50.295 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:50 np0005592157 podman[278318]: 2026-01-22 14:21:50.351468327 +0000 UTC m=+0.088222912 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 22 09:21:50 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:50.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:51 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:21:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:52.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 2702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:52 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:52 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:52.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 313 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 756 KiB/s wr, 14 op/s
Jan 22 09:21:53 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 2702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:53 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:54.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.270 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.271 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.271 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.271 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.271 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:21:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699726964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.691 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.845 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.847 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4721MB free_disk=20.867183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.847 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.847 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:54.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.943 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.943 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.943 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.944 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.944 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:21:54 np0005592157 nova_compute[245707]: 2026-01-22 14:21:54.944 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:21:55 np0005592157 nova_compute[245707]: 2026-01-22 14:21:55.022 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:55 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:55 np0005592157 nova_compute[245707]: 2026-01-22 14:21:55.166 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:21:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661583463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:21:55 np0005592157 nova_compute[245707]: 2026-01-22 14:21:55.464 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:21:55 np0005592157 nova_compute[245707]: 2026-01-22 14:21:55.470 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:21:55 np0005592157 nova_compute[245707]: 2026-01-22 14:21:55.491 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:21:55 np0005592157 nova_compute[245707]: 2026-01-22 14:21:55.494 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 09:21:55 np0005592157 nova_compute[245707]: 2026-01-22 14:21:55.494 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:21:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 09:21:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:56.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:56 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:56 np0005592157 nova_compute[245707]: 2026-01-22 14:21:56.495 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:21:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:21:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.0 total, 600.0 interval
Cumulative writes: 9492 writes, 43K keys, 9489 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 9492 writes, 9489 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1846 writes, 8338 keys, 1844 commit groups, 1.0 writes per commit group, ingest: 10.93 MB, 0.02 MB/s
Interval WAL: 1846 writes, 1844 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     83.0      0.61              0.23        26    0.023       0      0       0.0       0.0
  L6      1/0    9.44 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.3    111.8     94.0      2.30              0.89        25    0.092    170K    14K       0.0       0.0
 Sum      1/0    9.44 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.3     88.4     91.7      2.91              1.12        51    0.057    170K    14K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2     96.5     97.4      0.65              0.25        12    0.054     52K   3071       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    111.8     94.0      2.30              0.89        25    0.092    170K    14K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     83.5      0.60              0.23        25    0.024       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3000.0 total, 600.0 interval
Flush(GB): cumulative 0.049, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.26 GB write, 0.09 MB/s write, 0.25 GB read, 0.09 MB/s read, 2.9 seconds
Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 28.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000184 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1533,27.41 MB,9.01628%) FilterBlock(52,492.73 KB,0.158285%) IndexBlock(52,702.20 KB,0.225574%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 22 09:21:57 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 09:21:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 09:21:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:58.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 09:21:58 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:21:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:58.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:59 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.276 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "df283133-db55-4a7e-a651-12dd25bae88e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.276 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "df283133-db55-4a7e-a651-12dd25bae88e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.302 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.327 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "a8700e89-4334-472c-bf9a-9e203a561f43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.328 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "a8700e89-4334-472c-bf9a-9e203a561f43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.361 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.388 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.389 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.400 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.400 245711 INFO nova.compute.claims [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Claim successful on node compute-0.ctlplane.example.com
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.436 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.570 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:21:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 22 09:21:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:21:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894054400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.993 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:21:59 np0005592157 nova_compute[245707]: 2026-01-22 14:21:59.999 245711 DEBUG nova.compute.provider_tree [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.018 245711 DEBUG nova.scheduler.client.report [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.049 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.050 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.056 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.056 245711 INFO nova.compute.claims [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Claim successful on node compute-0.ctlplane.example.com
Jan 22 09:22:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.125 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "e9026970-063d-478f-88fc-ca4b764cc7dc" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.126 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "e9026970-063d-478f-88fc-ca4b764cc7dc" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.152 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "e9026970-063d-478f-88fc-ca4b764cc7dc" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.152 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.168 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.196 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.197 245711 DEBUG nova.network.neutron [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.214 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.234 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 09:22:00 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.331 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.352 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.353 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.354 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Creating image(s)
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.380 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.407 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.434 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.437 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.506 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.507 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.508 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.508 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.532 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.536 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 df283133-db55-4a7e-a651-12dd25bae88e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:22:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:22:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264698857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.762 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.767 245711 DEBUG nova.compute.provider_tree [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.784 245711 DEBUG nova.scheduler.client.report [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.817 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.828 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 df283133-db55-4a7e-a651-12dd25bae88e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.866 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "e9026970-063d-478f-88fc-ca4b764cc7dc" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.866 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "e9026970-063d-478f-88fc-ca4b764cc7dc" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.914 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "e9026970-063d-478f-88fc-ca4b764cc7dc" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.915 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.926 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] resizing rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:22:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.968 245711 DEBUG nova.network.neutron [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.968 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.972 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.972 245711 DEBUG nova.network.neutron [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 09:22:00 np0005592157 nova_compute[245707]: 2026-01-22 14:22:00.996 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.053 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.061 245711 DEBUG nova.objects.instance [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'migration_context' on Instance uuid df283133-db55-4a7e-a651-12dd25bae88e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.079 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.080 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Ensure instance console log exists: /var/lib/nova/instances/df283133-db55-4a7e-a651-12dd25bae88e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.080 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.080 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.081 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.082 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_options': None, 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.087 245711 WARNING nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.091 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.092 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.095 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.096 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.097 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.097 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.098 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.098 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.098 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.098 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.098 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.099 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.099 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.099 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.099 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.099 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.102 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.174 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.176 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.176 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Creating image(s)
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.206 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.238 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.264 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.268 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:22:01 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.325 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.326 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.327 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.327 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.353 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.358 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 a8700e89-4334-472c-bf9a-9e203a561f43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:22:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:22:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780644383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.530 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.565 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.569 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.618 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 a8700e89-4334-472c-bf9a-9e203a561f43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.689 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] resizing rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.724 245711 DEBUG nova.network.neutron [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.725 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 09:22:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.792 245711 DEBUG nova.objects.instance [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'migration_context' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.815 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.815 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Ensure instance console log exists: /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.816 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.816 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.816 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.818 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_options': None, 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.823 245711 WARNING nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.845 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.847 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.853 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.854 245711 DEBUG nova.virt.libvirt.host [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.855 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.856 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.856 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.856 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.856 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.857 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.857 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.857 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.857 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.858 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.858 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.858 245711 DEBUG nova.virt.hardware [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:22:01 np0005592157 nova_compute[245707]: 2026-01-22 14:22:01.861 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336972401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.024 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.027 245711 DEBUG nova.objects.instance [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df283133-db55-4a7e-a651-12dd25bae88e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.043 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <uuid>df283133-db55-4a7e-a651-12dd25bae88e</uuid>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <name>instance-0000000e</name>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <memory>131072</memory>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <vcpu>1</vcpu>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <metadata>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1730607557-1</nova:name>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:creationTime>2026-01-22 14:22:01</nova:creationTime>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:flavor name="m1.nano">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:memory>128</nova:memory>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:disk>1</nova:disk>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:swap>0</nova:swap>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </nova:flavor>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:owner>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:user uuid="a5be1e8103e142238ae4c912393095c4">tempest-ServersOnMultiNodesTest-59245381-project-member</nova:user>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:project uuid="688eff2d52114848b8ae16c9cfaa49d9">tempest-ServersOnMultiNodesTest-59245381</nova:project>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </nova:owner>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:ports/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </nova:instance>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </metadata>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <sysinfo type="smbios">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <system>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="serial">df283133-db55-4a7e-a651-12dd25bae88e</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="uuid">df283133-db55-4a7e-a651-12dd25bae88e</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </system>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </sysinfo>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <os>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <boot dev="hd"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <smbios mode="sysinfo"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </os>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <features>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <acpi/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <apic/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <vmcoreinfo/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </features>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <clock offset="utc">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <timer name="hpet" present="no"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </clock>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <cpu mode="custom" match="exact">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <model>Nehalem</model>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <devices>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <disk type="network" device="disk">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/df283133-db55-4a7e-a651-12dd25bae88e_disk">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <target dev="vda" bus="virtio"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <disk type="network" device="cdrom">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/df283133-db55-4a7e-a651-12dd25bae88e_disk.config">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <target dev="sda" bus="sata"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <serial type="pty">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <log file="/var/lib/nova/instances/df283133-db55-4a7e-a651-12dd25bae88e/console.log" append="off"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </serial>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <video>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <model type="virtio"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </video>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <input type="tablet" bus="usb"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <rng model="virtio">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </rng>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="usb" index="0"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <memballoon model="virtio">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <stats period="10"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </memballoon>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </devices>
Jan 22 09:22:02 np0005592157 nova_compute[245707]: </domain>
Jan 22 09:22:02 np0005592157 nova_compute[245707]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 09:22:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.090 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.091 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.092 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Using config drive#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.123 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 2707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785481063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.402 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Creating config drive at /var/lib/nova/instances/df283133-db55-4a7e-a651-12dd25bae88e/disk.config#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.407 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df283133-db55-4a7e-a651-12dd25bae88e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2j2fdhrj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 2707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.429 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.457 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.461 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.538 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df283133-db55-4a7e-a651-12dd25bae88e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2j2fdhrj" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.567 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image df283133-db55-4a7e-a651-12dd25bae88e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.570 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df283133-db55-4a7e-a651-12dd25bae88e/disk.config df283133-db55-4a7e-a651-12dd25bae88e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:22:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130573325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.897 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.899 245711 DEBUG nova.objects.instance [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:22:02 np0005592157 nova_compute[245707]: 2026-01-22 14:22:02.920 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <uuid>a8700e89-4334-472c-bf9a-9e203a561f43</uuid>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <name>instance-0000000f</name>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <memory>131072</memory>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <vcpu>1</vcpu>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <metadata>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1730607557-2</nova:name>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:creationTime>2026-01-22 14:22:01</nova:creationTime>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:flavor name="m1.nano">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:memory>128</nova:memory>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:disk>1</nova:disk>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:swap>0</nova:swap>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </nova:flavor>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:owner>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:user uuid="a5be1e8103e142238ae4c912393095c4">tempest-ServersOnMultiNodesTest-59245381-project-member</nova:user>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <nova:project uuid="688eff2d52114848b8ae16c9cfaa49d9">tempest-ServersOnMultiNodesTest-59245381</nova:project>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </nova:owner>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <nova:ports/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </nova:instance>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </metadata>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <sysinfo type="smbios">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <system>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="serial">a8700e89-4334-472c-bf9a-9e203a561f43</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="uuid">a8700e89-4334-472c-bf9a-9e203a561f43</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </system>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </sysinfo>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <os>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <boot dev="hd"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <smbios mode="sysinfo"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </os>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <features>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <acpi/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <apic/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <vmcoreinfo/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </features>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <clock offset="utc">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <timer name="hpet" present="no"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </clock>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <cpu mode="custom" match="exact">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <model>Nehalem</model>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  <devices>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <disk type="network" device="disk">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/a8700e89-4334-472c-bf9a-9e203a561f43_disk">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <target dev="vda" bus="virtio"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <disk type="network" device="cdrom">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/a8700e89-4334-472c-bf9a-9e203a561f43_disk.config">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <target dev="sda" bus="sata"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <serial type="pty">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <log file="/var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/console.log" append="off"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </serial>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <video>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <model type="virtio"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </video>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <input type="tablet" bus="usb"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <rng model="virtio">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </rng>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <controller type="usb" index="0"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    <memballoon model="virtio">
Jan 22 09:22:02 np0005592157 nova_compute[245707]:      <stats period="10"/>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:    </memballoon>
Jan 22 09:22:02 np0005592157 nova_compute[245707]:  </devices>
Jan 22 09:22:02 np0005592157 nova_compute[245707]: </domain>
Jan 22 09:22:02 np0005592157 nova_compute[245707]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 09:22:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.036 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.037 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.037 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Using config drive#033[00m
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.063 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:03 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.590 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Creating config drive at /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/disk.config#033[00m
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.596 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4qdwsx0x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.740 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4qdwsx0x" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.769 245711 DEBUG nova.storage.rbd_utils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image a8700e89-4334-472c-bf9a-9e203a561f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:03 np0005592157 nova_compute[245707]: 2026-01-22 14:22:03.773 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/disk.config a8700e89-4334-472c-bf9a-9e203a561f43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 369 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.073 245711 DEBUG oslo_concurrency.processutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/disk.config a8700e89-4334-472c-bf9a-9e203a561f43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.075 245711 INFO nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Deleting local config drive /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43/disk.config because it was imported into RBD.#033[00m
Jan 22 09:22:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:04.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:04 np0005592157 systemd[1]: Starting libvirt secret daemon...
Jan 22 09:22:04 np0005592157 systemd[1]: Started libvirt secret daemon.
Jan 22 09:22:04 np0005592157 systemd-machined[211644]: New machine qemu-3-instance-0000000f.
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006773595989205286 of space, bias 1.0, pg target 2.032078796761586 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.0001083251907686581 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:22:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 22 09:22:04 np0005592157 systemd[1]: Started Virtual Machine qemu-3-instance-0000000f.
Jan 22 09:22:04 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.810 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769091724.809691, a8700e89-4334-472c-bf9a-9e203a561f43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.812 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.815 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.815 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.819 245711 INFO nova.virt.libvirt.driver [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance spawned successfully.#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.820 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.843 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.847 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.867 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.868 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.868 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.868 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.869 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.869 245711 DEBUG nova.virt.libvirt.driver [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.927 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.927 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769091724.8113196, a8700e89-4334-472c-bf9a-9e203a561f43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.927 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] VM Started (Lifecycle Event)#033[00m
Jan 22 09:22:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:04.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.980 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:22:04 np0005592157 nova_compute[245707]: 2026-01-22 14:22:04.984 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:22:05 np0005592157 nova_compute[245707]: 2026-01-22 14:22:05.005 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:22:05 np0005592157 nova_compute[245707]: 2026-01-22 14:22:05.010 245711 INFO nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Took 3.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:22:05 np0005592157 nova_compute[245707]: 2026-01-22 14:22:05.011 245711 DEBUG nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:22:05 np0005592157 nova_compute[245707]: 2026-01-22 14:22:05.108 245711 INFO nova.compute.manager [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Took 5.69 seconds to build instance.#033[00m
Jan 22 09:22:05 np0005592157 nova_compute[245707]: 2026-01-22 14:22:05.169 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:05 np0005592157 nova_compute[245707]: 2026-01-22 14:22:05.181 245711 DEBUG oslo_concurrency.lockutils [None req-0430eb2f-a475-4c95-afe7-aeb37669496f a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "a8700e89-4334-472c-bf9a-9e203a561f43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:05 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 438 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 150 op/s
Jan 22 09:22:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:06.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:06 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:06.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 2717 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:07 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:07 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 2717 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 438 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 119 op/s
Jan 22 09:22:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:08 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:08.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:09 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 450 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 172 op/s
Jan 22 09:22:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:10.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:10 np0005592157 nova_compute[245707]: 2026-01-22 14:22:10.172 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:22:10 np0005592157 nova_compute[245707]: 2026-01-22 14:22:10.176 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:22:10 np0005592157 nova_compute[245707]: 2026-01-22 14:22:10.176 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:22:10 np0005592157 nova_compute[245707]: 2026-01-22 14:22:10.176 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:22:10 np0005592157 nova_compute[245707]: 2026-01-22 14:22:10.199 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:10 np0005592157 nova_compute[245707]: 2026-01-22 14:22:10.201 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:22:10 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:10 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.004000096s ======
Jan 22 09:22:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:10.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000096s
Jan 22 09:22:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 213 op/s
Jan 22 09:22:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:12.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 2722 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:12.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Jan 22 09:22:14 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:14 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:14 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 2722 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:14.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:14.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:15 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:15 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:15 np0005592157 nova_compute[245707]: 2026-01-22 14:22:15.201 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:15 np0005592157 podman[279211]: 2026-01-22 14:22:15.365794718 +0000 UTC m=+0.086384508 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:22:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.9 MiB/s wr, 166 op/s
Jan 22 09:22:16 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:16.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:22:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:16.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:17 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 2727 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 22 09:22:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:18.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:18 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:18 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 2727 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:18.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:19 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 22 09:22:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:20.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:20 np0005592157 nova_compute[245707]: 2026-01-22 14:22:20.201 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:20 np0005592157 nova_compute[245707]: 2026-01-22 14:22:20.208 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:20 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:20.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:21 np0005592157 podman[279237]: 2026-01-22 14:22:21.344495925 +0000 UTC m=+0.080241560 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 22 09:22:21 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Jan 22 09:22:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:22.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 2733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:22 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:23 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:23 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 2733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 3 op/s
Jan 22 09:22:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:24.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:24 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:25 np0005592157 nova_compute[245707]: 2026-01-22 14:22:25.203 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:25 np0005592157 nova_compute[245707]: 2026-01-22 14:22:25.210 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:25 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 3 op/s
Jan 22 09:22:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:26.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:26 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:26.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:27 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 3 op/s
Jan 22 09:22:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:28.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:28 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:28.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:29 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 479 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 351 KiB/s wr, 6 op/s
Jan 22 09:22:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:30.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:30 np0005592157 nova_compute[245707]: 2026-01-22 14:22:30.208 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:30 np0005592157 nova_compute[245707]: 2026-01-22 14:22:30.211 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:30 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:30.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:31 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 17 op/s
Jan 22 09:22:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:32.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 2738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:32.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:33 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:33 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 2738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:22:34 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:34 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:34.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:35 np0005592157 nova_compute[245707]: 2026-01-22 14:22:35.208 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:35 np0005592157 nova_compute[245707]: 2026-01-22 14:22:35.212 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 69fd5447-93ce-4b72-a9a9-fde02880bfa4 does not exist
Jan 22 09:22:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 36761596-f069-495f-a198-29fcba34ba7e does not exist
Jan 22 09:22:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 32d1488a-5a4b-4eac-b95e-571cbf386a6f does not exist
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:22:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:22:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:22:36 np0005592157 podman[279602]: 2026-01-22 14:22:36.10736775 +0000 UTC m=+0.048460006 container create 381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:22:36 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:22:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:22:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:36 np0005592157 systemd[1]: Started libpod-conmon-381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8.scope.
Jan 22 09:22:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:22:36 np0005592157 podman[279602]: 2026-01-22 14:22:36.085013514 +0000 UTC m=+0.026105790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:22:36 np0005592157 podman[279602]: 2026-01-22 14:22:36.194785043 +0000 UTC m=+0.135877399 container init 381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:22:36 np0005592157 podman[279602]: 2026-01-22 14:22:36.202989637 +0000 UTC m=+0.144081893 container start 381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:22:36 np0005592157 podman[279602]: 2026-01-22 14:22:36.206590126 +0000 UTC m=+0.147682402 container attach 381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 09:22:36 np0005592157 pedantic_gauss[279619]: 167 167
Jan 22 09:22:36 np0005592157 systemd[1]: libpod-381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8.scope: Deactivated successfully.
Jan 22 09:22:36 np0005592157 podman[279602]: 2026-01-22 14:22:36.216355929 +0000 UTC m=+0.157448205 container died 381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:22:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-22741bbba0ffde300509ad3e8ed2583cc879c46df83063873f7b3015dec882e8-merged.mount: Deactivated successfully.
Jan 22 09:22:36 np0005592157 podman[279602]: 2026-01-22 14:22:36.261791429 +0000 UTC m=+0.202883685 container remove 381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:22:36 np0005592157 systemd[1]: libpod-conmon-381d98141add98b4acd944b6410ff9ee33fc0851e2043e9b097379cbde5279c8.scope: Deactivated successfully.
Jan 22 09:22:36 np0005592157 podman[279644]: 2026-01-22 14:22:36.416784662 +0000 UTC m=+0.038913019 container create b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:22:36 np0005592157 systemd[1]: Started libpod-conmon-b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0.scope.
Jan 22 09:22:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:22:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3202fab96a2d4acec19d5e31c28899ccc976fd5ab46fd325bed07728ac479d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3202fab96a2d4acec19d5e31c28899ccc976fd5ab46fd325bed07728ac479d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3202fab96a2d4acec19d5e31c28899ccc976fd5ab46fd325bed07728ac479d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3202fab96a2d4acec19d5e31c28899ccc976fd5ab46fd325bed07728ac479d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3202fab96a2d4acec19d5e31c28899ccc976fd5ab46fd325bed07728ac479d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:36 np0005592157 podman[279644]: 2026-01-22 14:22:36.399209855 +0000 UTC m=+0.021338242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:22:36 np0005592157 podman[279644]: 2026-01-22 14:22:36.496702479 +0000 UTC m=+0.118830856 container init b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:22:36 np0005592157 podman[279644]: 2026-01-22 14:22:36.502547374 +0000 UTC m=+0.124675731 container start b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:22:36 np0005592157 podman[279644]: 2026-01-22 14:22:36.508131503 +0000 UTC m=+0.130259890 container attach b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:22:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:36.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:37 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 2748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:37 np0005592157 tender_sammet[279661]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:22:37 np0005592157 tender_sammet[279661]: --> relative data size: 1.0
Jan 22 09:22:37 np0005592157 tender_sammet[279661]: --> All data devices are unavailable
Jan 22 09:22:37 np0005592157 systemd[1]: libpod-b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0.scope: Deactivated successfully.
Jan 22 09:22:37 np0005592157 podman[279644]: 2026-01-22 14:22:37.404834235 +0000 UTC m=+1.026962602 container died b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:22:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9a3202fab96a2d4acec19d5e31c28899ccc976fd5ab46fd325bed07728ac479d-merged.mount: Deactivated successfully.
Jan 22 09:22:37 np0005592157 podman[279644]: 2026-01-22 14:22:37.505572409 +0000 UTC m=+1.127700766 container remove b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:22:37 np0005592157 systemd[1]: libpod-conmon-b0b95c1cc92b980d8b668c027286dc13550c17e65126e22ecf269e6c02b21be0.scope: Deactivated successfully.
Jan 22 09:22:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 09:22:38 np0005592157 podman[279832]: 2026-01-22 14:22:38.080218615 +0000 UTC m=+0.035358710 container create f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:22:38 np0005592157 systemd[1]: Started libpod-conmon-f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31.scope.
Jan 22 09:22:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:38.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:22:38 np0005592157 podman[279832]: 2026-01-22 14:22:38.153523198 +0000 UTC m=+0.108663323 container init f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_murdock, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:22:38 np0005592157 podman[279832]: 2026-01-22 14:22:38.159805594 +0000 UTC m=+0.114945689 container start f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:22:38 np0005592157 podman[279832]: 2026-01-22 14:22:38.065410987 +0000 UTC m=+0.020551102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:22:38 np0005592157 podman[279832]: 2026-01-22 14:22:38.163186368 +0000 UTC m=+0.118326463 container attach f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:22:38 np0005592157 musing_murdock[279848]: 167 167
Jan 22 09:22:38 np0005592157 systemd[1]: libpod-f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31.scope: Deactivated successfully.
Jan 22 09:22:38 np0005592157 podman[279832]: 2026-01-22 14:22:38.164448579 +0000 UTC m=+0.119588674 container died f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_murdock, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:22:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2133c29150b2a034c480ce235d08273b2924b405657c053a585d246acfebb0aa-merged.mount: Deactivated successfully.
Jan 22 09:22:38 np0005592157 podman[279832]: 2026-01-22 14:22:38.200008813 +0000 UTC m=+0.155148908 container remove f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_murdock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:22:38 np0005592157 systemd[1]: libpod-conmon-f5a05e0c9183eefcd44b60cf8b8f6a65c8cf9cab4aec0a569747da0a20779f31.scope: Deactivated successfully.
Jan 22 09:22:38 np0005592157 nova_compute[245707]: 2026-01-22 14:22:38.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:38 np0005592157 nova_compute[245707]: 2026-01-22 14:22:38.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:22:38 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:38 np0005592157 ceph-mon[74359]: Health check update: 8 slow ops, oldest one blocked for 2748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:38 np0005592157 podman[279871]: 2026-01-22 14:22:38.35715093 +0000 UTC m=+0.040709893 container create 1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:22:38 np0005592157 systemd[1]: Started libpod-conmon-1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413.scope.
Jan 22 09:22:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:22:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef1d413019589497320668ee5bebc910c3318b2b260f6c56743bdc07d4e79b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef1d413019589497320668ee5bebc910c3318b2b260f6c56743bdc07d4e79b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef1d413019589497320668ee5bebc910c3318b2b260f6c56743bdc07d4e79b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef1d413019589497320668ee5bebc910c3318b2b260f6c56743bdc07d4e79b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:38 np0005592157 podman[279871]: 2026-01-22 14:22:38.433375825 +0000 UTC m=+0.116934798 container init 1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:22:38 np0005592157 podman[279871]: 2026-01-22 14:22:38.340766493 +0000 UTC m=+0.024325486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:22:38 np0005592157 podman[279871]: 2026-01-22 14:22:38.440686867 +0000 UTC m=+0.124245830 container start 1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jepsen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 09:22:38 np0005592157 podman[279871]: 2026-01-22 14:22:38.444757268 +0000 UTC m=+0.128316251 container attach 1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:22:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:38.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]: {
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:    "0": [
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:        {
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "devices": [
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "/dev/loop3"
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            ],
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "lv_name": "ceph_lv0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "lv_size": "7511998464",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "name": "ceph_lv0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "tags": {
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.cluster_name": "ceph",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.crush_device_class": "",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.encrypted": "0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.osd_id": "0",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.type": "block",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:                "ceph.vdo": "0"
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            },
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "type": "block",
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:            "vg_name": "ceph_vg0"
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:        }
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]:    ]
Jan 22 09:22:39 np0005592157 wonderful_jepsen[279888]: }
Jan 22 09:22:39 np0005592157 systemd[1]: libpod-1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413.scope: Deactivated successfully.
Jan 22 09:22:39 np0005592157 podman[279871]: 2026-01-22 14:22:39.221432897 +0000 UTC m=+0.904991860 container died 1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:22:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5ef1d413019589497320668ee5bebc910c3318b2b260f6c56743bdc07d4e79b5-merged.mount: Deactivated successfully.
Jan 22 09:22:39 np0005592157 podman[279871]: 2026-01-22 14:22:39.278147057 +0000 UTC m=+0.961706020 container remove 1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:22:39 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:39 np0005592157 systemd[1]: libpod-conmon-1d50ba563cc5625ec701b3b42b0eafd58ca2f948c8784fb6ac5c330bfb623413.scope: Deactivated successfully.
Jan 22 09:22:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 22 09:22:39 np0005592157 podman[280052]: 2026-01-22 14:22:39.869686013 +0000 UTC m=+0.045140433 container create bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_robinson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:22:39 np0005592157 systemd[1]: Started libpod-conmon-bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde.scope.
Jan 22 09:22:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:22:39 np0005592157 podman[280052]: 2026-01-22 14:22:39.849888691 +0000 UTC m=+0.025343161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:22:39 np0005592157 podman[280052]: 2026-01-22 14:22:39.944132234 +0000 UTC m=+0.119586674 container init bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_robinson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:22:39 np0005592157 podman[280052]: 2026-01-22 14:22:39.949938098 +0000 UTC m=+0.125392518 container start bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:22:39 np0005592157 podman[280052]: 2026-01-22 14:22:39.953491176 +0000 UTC m=+0.128945616 container attach bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_robinson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:22:39 np0005592157 thirsty_robinson[280068]: 167 167
Jan 22 09:22:39 np0005592157 systemd[1]: libpod-bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde.scope: Deactivated successfully.
Jan 22 09:22:39 np0005592157 podman[280052]: 2026-01-22 14:22:39.955200179 +0000 UTC m=+0.130654589 container died bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:22:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-29352bfe887bdf27ca019d58c2b6653db3022c41c879297c34ded1ec04e27082-merged.mount: Deactivated successfully.
Jan 22 09:22:39 np0005592157 podman[280052]: 2026-01-22 14:22:39.993288896 +0000 UTC m=+0.168743316 container remove bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:22:39 np0005592157 systemd[1]: libpod-conmon-bccf1c80c44ef14951d645f3558f2ae366cdb112c6843b07452258ce80322bde.scope: Deactivated successfully.
Jan 22 09:22:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:40 np0005592157 podman[280091]: 2026-01-22 14:22:40.161379853 +0000 UTC m=+0.037539024 container create 0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 09:22:40 np0005592157 systemd[1]: Started libpod-conmon-0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481.scope.
Jan 22 09:22:40 np0005592157 nova_compute[245707]: 2026-01-22 14:22:40.210 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:22:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289c5e9b8db583c5c49d6cbe927df9bc399141461604190b8bcd51265100da4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289c5e9b8db583c5c49d6cbe927df9bc399141461604190b8bcd51265100da4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289c5e9b8db583c5c49d6cbe927df9bc399141461604190b8bcd51265100da4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289c5e9b8db583c5c49d6cbe927df9bc399141461604190b8bcd51265100da4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:22:40 np0005592157 podman[280091]: 2026-01-22 14:22:40.236731297 +0000 UTC m=+0.112890478 container init 0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:22:40 np0005592157 podman[280091]: 2026-01-22 14:22:40.144084054 +0000 UTC m=+0.020243245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:22:40 np0005592157 podman[280091]: 2026-01-22 14:22:40.244820218 +0000 UTC m=+0.120979389 container start 0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:22:40 np0005592157 podman[280091]: 2026-01-22 14:22:40.248723155 +0000 UTC m=+0.124882346 container attach 0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:22:40 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:40.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:41 np0005592157 strange_kilby[280107]: {
Jan 22 09:22:41 np0005592157 strange_kilby[280107]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:22:41 np0005592157 strange_kilby[280107]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:22:41 np0005592157 strange_kilby[280107]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:22:41 np0005592157 strange_kilby[280107]:        "osd_id": 0,
Jan 22 09:22:41 np0005592157 strange_kilby[280107]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:22:41 np0005592157 strange_kilby[280107]:        "type": "bluestore"
Jan 22 09:22:41 np0005592157 strange_kilby[280107]:    }
Jan 22 09:22:41 np0005592157 strange_kilby[280107]: }
Jan 22 09:22:41 np0005592157 systemd[1]: libpod-0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481.scope: Deactivated successfully.
Jan 22 09:22:41 np0005592157 podman[280091]: 2026-01-22 14:22:41.152449072 +0000 UTC m=+1.028608243 container died 0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:22:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-289c5e9b8db583c5c49d6cbe927df9bc399141461604190b8bcd51265100da4d-merged.mount: Deactivated successfully.
Jan 22 09:22:41 np0005592157 podman[280091]: 2026-01-22 14:22:41.207654325 +0000 UTC m=+1.083813496 container remove 0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:22:41 np0005592157 systemd[1]: libpod-conmon-0e94d672afc6d1a5468ea55372c876c0ea22c57f78d77c1f18081b1e9dac7481.scope: Deactivated successfully.
Jan 22 09:22:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:22:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:22:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 51e688fe-5648-4a51-b6e5-a9dc1c46b302 does not exist
Jan 22 09:22:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 31babc9c-26b2-4e39-9b04-aa348c12842f does not exist
Jan 22 09:22:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d8dcdbc7-1d80-41a7-972b-d0c33c8c6815 does not exist
Jan 22 09:22:41 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 09:22:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 2753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:42 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:42.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:43 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:43 np0005592157 ceph-mon[74359]: Health check update: 8 slow ops, oldest one blocked for 2753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 09:22:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:22:43.890 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:22:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:22:43.892 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:22:43 np0005592157 nova_compute[245707]: 2026-01-22 14:22:43.891 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:44 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:45.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:45 np0005592157 nova_compute[245707]: 2026-01-22 14:22:45.212 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:45 np0005592157 nova_compute[245707]: 2026-01-22 14:22:45.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:45 np0005592157 nova_compute[245707]: 2026-01-22 14:22:45.247 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:45 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 09:22:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:46.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:46 np0005592157 nova_compute[245707]: 2026-01-22 14:22:46.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:46 np0005592157 podman[280199]: 2026-01-22 14:22:46.350861307 +0000 UTC m=+0.085689052 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:22:46 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:22:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:47.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:47 np0005592157 nova_compute[245707]: 2026-01-22 14:22:47.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.288503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767288626, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1853, "num_deletes": 256, "total_data_size": 2634931, "memory_usage": 2677544, "flush_reason": "Manual Compaction"}
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767320350, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 2581347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42900, "largest_seqno": 44752, "table_properties": {"data_size": 2573555, "index_size": 4350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19900, "raw_average_key_size": 21, "raw_value_size": 2556335, "raw_average_value_size": 2699, "num_data_blocks": 188, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091627, "oldest_key_time": 1769091627, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 31910 microseconds, and 8084 cpu microseconds.
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.320457) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 2581347 bytes OK
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.320500) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.322846) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.322875) EVENT_LOG_v1 {"time_micros": 1769091767322869, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.322895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 2626900, prev total WAL file size 2626900, number of live WAL files 2.
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.324120) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(2520KB)], [92(9671KB)]
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767324222, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12484912, "oldest_snapshot_seqno": -1}
Jan 22 09:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:22:47
Jan 22 09:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups']
Jan 22 09:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 9072 keys, 12329174 bytes, temperature: kUnknown
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767431152, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 12329174, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12273813, "index_size": 31569, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 242657, "raw_average_key_size": 26, "raw_value_size": 12113716, "raw_average_value_size": 1335, "num_data_blocks": 1217, "num_entries": 9072, "num_filter_entries": 9072, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.431522) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12329174 bytes
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.432802) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.6 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.4 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 9597, records dropped: 525 output_compression: NoCompression
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.432820) EVENT_LOG_v1 {"time_micros": 1769091767432811, "job": 54, "event": "compaction_finished", "compaction_time_micros": 107036, "compaction_time_cpu_micros": 50507, "output_level": 6, "num_output_files": 1, "total_output_size": 12329174, "num_input_records": 9597, "num_output_records": 9072, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767433681, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767435604, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.323946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.435651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.435658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.435659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.435661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:22:47.435662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:22:47.595 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:22:47.595 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:22:47.596 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 09:22:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:22:47.894 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:22:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:48 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:49.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.290 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.291 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.291 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.292 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.292 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:49 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.470 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.470 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquired lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.470 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.471 245711 DEBUG nova.objects.instance [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.666 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:22:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 09:22:49 np0005592157 nova_compute[245707]: 2026-01-22 14:22:49.956 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:22:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.214 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.215 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.215 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.215 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.216 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.217 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.333 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Releasing lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:22:50 np0005592157 nova_compute[245707]: 2026-01-22 14:22:50.333 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:22:50 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:51.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:51 np0005592157 nova_compute[245707]: 2026-01-22 14:22:51.328 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:51 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 09:22:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 2758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:52 np0005592157 podman[280228]: 2026-01-22 14:22:52.346777796 +0000 UTC m=+0.084114292 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:22:52 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:52 np0005592157 ceph-mon[74359]: Health check update: 8 slow ops, oldest one blocked for 2758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:53 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:22:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:54.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:54 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:55 np0005592157 nova_compute[245707]: 2026-01-22 14:22:55.217 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:55 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:22:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:56.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.275 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.275 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.275 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.275 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.276 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:56 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:22:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542191379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.713 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.812 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.813 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.816 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.817 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.946 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.947 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4512MB free_disk=20.771656036376953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.948 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:56 np0005592157 nova_compute[245707]: 2026-01-22 14:22:56.948 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.044 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.044 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.044 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.045 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.045 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.045 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance a8700e89-4334-472c-bf9a-9e203a561f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.046 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.046 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.181 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 2768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:22:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278762935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:22:57 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:57 np0005592157 ceph-mon[74359]: Health check update: 8 slow ops, oldest one blocked for 2768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.590 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.595 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.613 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.634 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:22:57 np0005592157 nova_compute[245707]: 2026-01-22 14:22:57.635 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:22:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:58 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:22:58 np0005592157 nova_compute[245707]: 2026-01-22 14:22:58.634 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:22:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:59 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:22:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:00 np0005592157 nova_compute[245707]: 2026-01-22 14:23:00.094 245711 DEBUG oslo_concurrency.lockutils [None req-93f38e26-f771-4021-91c1-de525365e7fa 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:23:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:00.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:00 np0005592157 nova_compute[245707]: 2026-01-22 14:23:00.219 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:23:00 np0005592157 nova_compute[245707]: 2026-01-22 14:23:00.220 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:00 np0005592157 nova_compute[245707]: 2026-01-22 14:23:00.220 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:23:00 np0005592157 nova_compute[245707]: 2026-01-22 14:23:00.220 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:00 np0005592157 nova_compute[245707]: 2026-01-22 14:23:00.221 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:00 np0005592157 nova_compute[245707]: 2026-01-22 14:23:00.222 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:00 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:01 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:02.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:02 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:03 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:03 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010321245986878839 of space, bias 1.0, pg target 3.0963737960636517 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:23:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:23:04 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:05 np0005592157 nova_compute[245707]: 2026-01-22 14:23:05.222 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:05 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:06.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:06 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:07.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:07 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:08.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:08 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:09 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:10.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:10 np0005592157 nova_compute[245707]: 2026-01-22 14:23:10.224 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:10 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:11.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:11 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:12 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:12 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:13.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:13 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:14.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:14 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:15 np0005592157 nova_compute[245707]: 2026-01-22 14:23:15.226 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:15 np0005592157 nova_compute[245707]: 2026-01-22 14:23:15.228 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:23:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:15 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:16.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:23:16 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:17 np0005592157 podman[280427]: 2026-01-22 14:23:17.31112555 +0000 UTC m=+0.049804609 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:23:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.982716) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797982756, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 616, "num_deletes": 251, "total_data_size": 582741, "memory_usage": 593896, "flush_reason": "Manual Compaction"}
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797988374, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 573365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44753, "largest_seqno": 45368, "table_properties": {"data_size": 570254, "index_size": 955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8063, "raw_average_key_size": 19, "raw_value_size": 563778, "raw_average_value_size": 1375, "num_data_blocks": 42, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091767, "oldest_key_time": 1769091767, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 6035 microseconds, and 2615 cpu microseconds.
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.988746) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 573365 bytes OK
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.988767) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991135) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991158) EVENT_LOG_v1 {"time_micros": 1769091797991144, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991176) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 579402, prev total WAL file size 579402, number of live WAL files 2.
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991638) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(559KB)], [95(11MB)]
Jan 22 09:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797991717, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 12902539, "oldest_snapshot_seqno": -1}
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 8972 keys, 11173549 bytes, temperature: kUnknown
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798082221, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 11173549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11119788, "index_size": 30225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22469, "raw_key_size": 241466, "raw_average_key_size": 26, "raw_value_size": 10962072, "raw_average_value_size": 1221, "num_data_blocks": 1156, "num_entries": 8972, "num_filter_entries": 8972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.082816) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 11173549 bytes
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.084465) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 122.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(42.0) write-amplify(19.5) OK, records in: 9482, records dropped: 510 output_compression: NoCompression
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.084488) EVENT_LOG_v1 {"time_micros": 1769091798084478, "job": 56, "event": "compaction_finished", "compaction_time_micros": 90910, "compaction_time_cpu_micros": 25858, "output_level": 6, "num_output_files": 1, "total_output_size": 11173549, "num_input_records": 9482, "num_output_records": 8972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798085079, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798088683, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:18.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1035172847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1035172847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:23:18 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:20 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:20.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:20 np0005592157 nova_compute[245707]: 2026-01-22 14:23:20.227 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:21 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:22.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:22 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:23.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:23 np0005592157 podman[280453]: 2026-01-22 14:23:23.33480156 +0000 UTC m=+0.071966251 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:23:23 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:23 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:24.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:24 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:25.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:25 np0005592157 nova_compute[245707]: 2026-01-22 14:23:25.229 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:25 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:26.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:26 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:27 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:28.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:28 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:29 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:30.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:30 np0005592157 nova_compute[245707]: 2026-01-22 14:23:30.231 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:23:30 np0005592157 nova_compute[245707]: 2026-01-22 14:23:30.232 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:30 np0005592157 nova_compute[245707]: 2026-01-22 14:23:30.232 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:23:30 np0005592157 nova_compute[245707]: 2026-01-22 14:23:30.232 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:30 np0005592157 nova_compute[245707]: 2026-01-22 14:23:30.233 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:30 np0005592157 nova_compute[245707]: 2026-01-22 14:23:30.233 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:30 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:31 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:32.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:32 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:32 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:33 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:34.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:34 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:34 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:35 np0005592157 nova_compute[245707]: 2026-01-22 14:23:35.235 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:23:35 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:36 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:37 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:37 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:38.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:38 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:39 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.237 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.238 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.238 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.238 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.239 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.239 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:40 np0005592157 nova_compute[245707]: 2026-01-22 14:23:40.243 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:23:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:40.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:40 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:41.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:41 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:42.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:42 np0005592157 podman[280714]: 2026-01-22 14:23:42.389612359 +0000 UTC m=+0.056017924 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:23:42 np0005592157 podman[280714]: 2026-01-22 14:23:42.486350854 +0000 UTC m=+0.152756409 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:23:42 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:42 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:43 np0005592157 podman[280864]: 2026-01-22 14:23:43.075818849 +0000 UTC m=+0.049027300 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:23:43 np0005592157 podman[280864]: 2026-01-22 14:23:43.082374702 +0000 UTC m=+0.055583133 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:23:43 np0005592157 podman[280931]: 2026-01-22 14:23:43.287798729 +0000 UTC m=+0.059071840 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, architecture=x86_64, com.redhat.component=keepalived-container, release=1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 09:23:43 np0005592157 podman[280931]: 2026-01-22 14:23:43.301287754 +0000 UTC m=+0.072560855 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.openshift.tags=Ceph keepalived, vcs-type=git, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, architecture=x86_64, io.openshift.expose-services=, name=keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4)
Jan 22 09:23:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:23:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:23:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 542d77cb-a320-48ca-bc44-7ccf3c85abd1 does not exist
Jan 22 09:23:44 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0543d167-9122-4837-afb6-b62799b9c868 does not exist
Jan 22 09:23:44 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f55707d9-13a7-4731-8635-584b4f70e22d does not exist
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:23:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:44.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:23:44 np0005592157 podman[281236]: 2026-01-22 14:23:44.719917671 +0000 UTC m=+0.064585647 container create 8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:23:44 np0005592157 systemd[1]: Started libpod-conmon-8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1.scope.
Jan 22 09:23:44 np0005592157 podman[281236]: 2026-01-22 14:23:44.686832289 +0000 UTC m=+0.031500335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:23:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:23:44 np0005592157 podman[281236]: 2026-01-22 14:23:44.809687923 +0000 UTC m=+0.154355869 container init 8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:23:44 np0005592157 podman[281236]: 2026-01-22 14:23:44.819699392 +0000 UTC m=+0.164367338 container start 8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:23:44 np0005592157 podman[281236]: 2026-01-22 14:23:44.825134587 +0000 UTC m=+0.169802683 container attach 8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:23:44 np0005592157 boring_fermat[281252]: 167 167
Jan 22 09:23:44 np0005592157 systemd[1]: libpod-8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1.scope: Deactivated successfully.
Jan 22 09:23:44 np0005592157 podman[281236]: 2026-01-22 14:23:44.828260715 +0000 UTC m=+0.172928671 container died 8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermat, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:23:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6f93c93eabdad95e8af125a7c77a91c93d414799b85749a7691b0b5d0c0d5ff8-merged.mount: Deactivated successfully.
Jan 22 09:23:44 np0005592157 podman[281236]: 2026-01-22 14:23:44.872210187 +0000 UTC m=+0.216878123 container remove 8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:23:44 np0005592157 systemd[1]: libpod-conmon-8e817388415c591c30c94a4218fcf07433319cff03fb47326b7e92d7fc2249e1.scope: Deactivated successfully.
Jan 22 09:23:45 np0005592157 podman[281276]: 2026-01-22 14:23:45.041385953 +0000 UTC m=+0.051898911 container create 9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:23:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:45 np0005592157 systemd[1]: Started libpod-conmon-9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae.scope.
Jan 22 09:23:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:23:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b43ed92af32dd306c022adf3fdd10c9eaadb9d24d412472bf0bd670d7124810/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b43ed92af32dd306c022adf3fdd10c9eaadb9d24d412472bf0bd670d7124810/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:45 np0005592157 podman[281276]: 2026-01-22 14:23:45.021779816 +0000 UTC m=+0.032292794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:23:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b43ed92af32dd306c022adf3fdd10c9eaadb9d24d412472bf0bd670d7124810/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b43ed92af32dd306c022adf3fdd10c9eaadb9d24d412472bf0bd670d7124810/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b43ed92af32dd306c022adf3fdd10c9eaadb9d24d412472bf0bd670d7124810/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:45 np0005592157 podman[281276]: 2026-01-22 14:23:45.133758499 +0000 UTC m=+0.144271477 container init 9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:23:45 np0005592157 podman[281276]: 2026-01-22 14:23:45.140339953 +0000 UTC m=+0.150852901 container start 9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Jan 22 09:23:45 np0005592157 podman[281276]: 2026-01-22 14:23:45.143695876 +0000 UTC m=+0.154208824 container attach 9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:23:45 np0005592157 nova_compute[245707]: 2026-01-22 14:23:45.240 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:45 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:45 np0005592157 xenodochial_nobel[281292]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:23:45 np0005592157 xenodochial_nobel[281292]: --> relative data size: 1.0
Jan 22 09:23:45 np0005592157 xenodochial_nobel[281292]: --> All data devices are unavailable
Jan 22 09:23:45 np0005592157 systemd[1]: libpod-9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae.scope: Deactivated successfully.
Jan 22 09:23:45 np0005592157 podman[281276]: 2026-01-22 14:23:45.965335913 +0000 UTC m=+0.975848861 container died 9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:23:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3b43ed92af32dd306c022adf3fdd10c9eaadb9d24d412472bf0bd670d7124810-merged.mount: Deactivated successfully.
Jan 22 09:23:46 np0005592157 podman[281276]: 2026-01-22 14:23:46.017565052 +0000 UTC m=+1.028078000 container remove 9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:23:46 np0005592157 systemd[1]: libpod-conmon-9e0b89a7fe1e6fc90aa643efe78ddf5bc8e574a197c3b7fe9903c7783008d8ae.scope: Deactivated successfully.
Jan 22 09:23:46 np0005592157 nova_compute[245707]: 2026-01-22 14:23:46.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:46 np0005592157 nova_compute[245707]: 2026-01-22 14:23:46.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:46.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:23:46 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:46 np0005592157 podman[281462]: 2026-01-22 14:23:46.635034542 +0000 UTC m=+0.042421675 container create e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bartik, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:23:46 np0005592157 systemd[1]: Started libpod-conmon-e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408.scope.
Jan 22 09:23:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:23:46 np0005592157 podman[281462]: 2026-01-22 14:23:46.617570088 +0000 UTC m=+0.024957251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:23:46 np0005592157 podman[281462]: 2026-01-22 14:23:46.717598605 +0000 UTC m=+0.124985758 container init e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:23:46 np0005592157 podman[281462]: 2026-01-22 14:23:46.724952528 +0000 UTC m=+0.132339661 container start e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bartik, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:23:46 np0005592157 musing_bartik[281478]: 167 167
Jan 22 09:23:46 np0005592157 podman[281462]: 2026-01-22 14:23:46.728354402 +0000 UTC m=+0.135741555 container attach e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:23:46 np0005592157 systemd[1]: libpod-e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408.scope: Deactivated successfully.
Jan 22 09:23:46 np0005592157 conmon[281478]: conmon e96123d61256829b9d43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408.scope/container/memory.events
Jan 22 09:23:46 np0005592157 podman[281462]: 2026-01-22 14:23:46.7302848 +0000 UTC m=+0.137671933 container died e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:23:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4304ae9ce7599d796439c70eda2e990faa31c94c15f36e5c9f857d31496db989-merged.mount: Deactivated successfully.
Jan 22 09:23:46 np0005592157 podman[281462]: 2026-01-22 14:23:46.774267694 +0000 UTC m=+0.181654827 container remove e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:23:46 np0005592157 systemd[1]: libpod-conmon-e96123d61256829b9d4337c65c619b0d4060baad2f758086c592742564da0408.scope: Deactivated successfully.
Jan 22 09:23:46 np0005592157 podman[281500]: 2026-01-22 14:23:46.92296744 +0000 UTC m=+0.038960689 container create 55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 22 09:23:46 np0005592157 systemd[1]: Started libpod-conmon-55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e.scope.
Jan 22 09:23:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:23:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d90a0c82b493df5e3656699f86b14ff74b25db0a976e0f4bde5147100b26ff8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d90a0c82b493df5e3656699f86b14ff74b25db0a976e0f4bde5147100b26ff8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d90a0c82b493df5e3656699f86b14ff74b25db0a976e0f4bde5147100b26ff8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d90a0c82b493df5e3656699f86b14ff74b25db0a976e0f4bde5147100b26ff8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:46 np0005592157 podman[281500]: 2026-01-22 14:23:46.998545369 +0000 UTC m=+0.114538628 container init 55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_franklin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:23:47 np0005592157 podman[281500]: 2026-01-22 14:23:46.905764863 +0000 UTC m=+0.021758132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:23:47 np0005592157 podman[281500]: 2026-01-22 14:23:47.005662616 +0000 UTC m=+0.121655865 container start 55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:23:47 np0005592157 podman[281500]: 2026-01-22 14:23:47.010131437 +0000 UTC m=+0.126124706 container attach 55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_franklin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 22 09:23:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:47 np0005592157 nova_compute[245707]: 2026-01-22 14:23:47.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:47 np0005592157 nova_compute[245707]: 2026-01-22 14:23:47.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:23:47
Jan 22 09:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Jan 22 09:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:23:47 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:47 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:23:47.596 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:23:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:23:47.597 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:23:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:23:47.597 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]: {
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:    "0": [
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:        {
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "devices": [
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "/dev/loop3"
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            ],
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "lv_name": "ceph_lv0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "lv_size": "7511998464",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "name": "ceph_lv0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "tags": {
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.cluster_name": "ceph",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.crush_device_class": "",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.encrypted": "0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.osd_id": "0",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.type": "block",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:                "ceph.vdo": "0"
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            },
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "type": "block",
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:            "vg_name": "ceph_vg0"
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:        }
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]:    ]
Jan 22 09:23:47 np0005592157 amazing_franklin[281516]: }
Jan 22 09:23:47 np0005592157 systemd[1]: libpod-55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e.scope: Deactivated successfully.
Jan 22 09:23:47 np0005592157 podman[281500]: 2026-01-22 14:23:47.792458797 +0000 UTC m=+0.908452046 container died 55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:23:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5d90a0c82b493df5e3656699f86b14ff74b25db0a976e0f4bde5147100b26ff8-merged.mount: Deactivated successfully.
Jan 22 09:23:47 np0005592157 podman[281500]: 2026-01-22 14:23:47.859056132 +0000 UTC m=+0.975049381 container remove 55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_franklin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:23:47 np0005592157 systemd[1]: libpod-conmon-55c5afa87dc05a1683a25c4c67efad10b0b3b7690e73be012e55c04f85f2cc4e.scope: Deactivated successfully.
Jan 22 09:23:47 np0005592157 podman[281527]: 2026-01-22 14:23:47.899719663 +0000 UTC m=+0.070334269 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:23:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:48.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:48 np0005592157 podman[281698]: 2026-01-22 14:23:48.548373939 +0000 UTC m=+0.087858366 container create 2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:23:48 np0005592157 podman[281698]: 2026-01-22 14:23:48.488849969 +0000 UTC m=+0.028334396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:23:48 np0005592157 systemd[1]: Started libpod-conmon-2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64.scope.
Jan 22 09:23:48 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:23:48 np0005592157 podman[281698]: 2026-01-22 14:23:48.642086238 +0000 UTC m=+0.181570665 container init 2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:23:48 np0005592157 podman[281698]: 2026-01-22 14:23:48.650709973 +0000 UTC m=+0.190194380 container start 2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:23:48 np0005592157 angry_joliot[281715]: 167 167
Jan 22 09:23:48 np0005592157 systemd[1]: libpod-2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64.scope: Deactivated successfully.
Jan 22 09:23:48 np0005592157 podman[281698]: 2026-01-22 14:23:48.656436355 +0000 UTC m=+0.195920782 container attach 2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:23:48 np0005592157 podman[281698]: 2026-01-22 14:23:48.657706027 +0000 UTC m=+0.197190434 container died 2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_joliot, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 09:23:48 np0005592157 conmon[281715]: conmon 2614910296d1afb581b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64.scope/container/memory.events
Jan 22 09:23:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6a39e4dd708c5dbc111199e965142ee1f144645b60f59571ec2e45604aa0284c-merged.mount: Deactivated successfully.
Jan 22 09:23:48 np0005592157 podman[281698]: 2026-01-22 14:23:48.700101051 +0000 UTC m=+0.239585458 container remove 2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:23:48 np0005592157 systemd[1]: libpod-conmon-2614910296d1afb581b3ef3fd8e67abe8db3e8a110c5753deed4a2bac42c2b64.scope: Deactivated successfully.
Jan 22 09:23:48 np0005592157 podman[281739]: 2026-01-22 14:23:48.907981439 +0000 UTC m=+0.052809944 container create b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:23:48 np0005592157 systemd[1]: Started libpod-conmon-b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f.scope.
Jan 22 09:23:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:23:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c36447bcd592fadf15df26f5e17ef174038ddce02b4f35c88f4b61dd30267/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c36447bcd592fadf15df26f5e17ef174038ddce02b4f35c88f4b61dd30267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c36447bcd592fadf15df26f5e17ef174038ddce02b4f35c88f4b61dd30267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c36447bcd592fadf15df26f5e17ef174038ddce02b4f35c88f4b61dd30267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:23:48 np0005592157 podman[281739]: 2026-01-22 14:23:48.974151944 +0000 UTC m=+0.118980469 container init b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:23:48 np0005592157 podman[281739]: 2026-01-22 14:23:48.886101465 +0000 UTC m=+0.030930020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:23:48 np0005592157 podman[281739]: 2026-01-22 14:23:48.981516877 +0000 UTC m=+0.126345382 container start b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:23:48 np0005592157 podman[281739]: 2026-01-22 14:23:48.984825619 +0000 UTC m=+0.129654124 container attach b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:23:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:49 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]: {
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]:        "osd_id": 0,
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]:        "type": "bluestore"
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]:    }
Jan 22 09:23:49 np0005592157 mystifying_curie[281755]: }
Jan 22 09:23:49 np0005592157 systemd[1]: libpod-b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f.scope: Deactivated successfully.
Jan 22 09:23:49 np0005592157 conmon[281755]: conmon b3fbf504a05234f5f396 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f.scope/container/memory.events
Jan 22 09:23:49 np0005592157 podman[281739]: 2026-01-22 14:23:49.864022627 +0000 UTC m=+1.008851152 container died b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:23:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dd6c36447bcd592fadf15df26f5e17ef174038ddce02b4f35c88f4b61dd30267-merged.mount: Deactivated successfully.
Jan 22 09:23:49 np0005592157 podman[281739]: 2026-01-22 14:23:49.919332352 +0000 UTC m=+1.064160857 container remove b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:23:49 np0005592157 systemd[1]: libpod-conmon-b3fbf504a05234f5f396192cf02080ea59625ed53ecf51cc94d42b5c535f987f.scope: Deactivated successfully.
Jan 22 09:23:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:23:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:23:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e5059eb9-402b-425e-ba70-915a1d3792ec does not exist
Jan 22 09:23:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7a0d9e99-dada-44dd-a749-041470783554 does not exist
Jan 22 09:23:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 17c3da9c-cc4e-46ed-9ec8-4a527b5c9929 does not exist
Jan 22 09:23:50 np0005592157 nova_compute[245707]: 2026-01-22 14:23:50.241 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:50.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:50 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.242 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.243 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.243 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.294 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.295 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.295 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.295 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.295 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.544 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.544 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquired lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.544 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:23:51 np0005592157 nova_compute[245707]: 2026-01-22 14:23:51.545 245711 DEBUG nova.objects.instance [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:23:51 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:52 np0005592157 nova_compute[245707]: 2026-01-22 14:23:52.055 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:23:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:52.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:52 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:53 np0005592157 nova_compute[245707]: 2026-01-22 14:23:53.022 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:23:53 np0005592157 nova_compute[245707]: 2026-01-22 14:23:53.077 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Releasing lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:23:53 np0005592157 nova_compute[245707]: 2026-01-22 14:23:53.078 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:23:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:53 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:53 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:54 np0005592157 nova_compute[245707]: 2026-01-22 14:23:54.074 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:54.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:54 np0005592157 podman[281844]: 2026-01-22 14:23:54.344684788 +0000 UTC m=+0.075873507 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 09:23:54 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:55 np0005592157 nova_compute[245707]: 2026-01-22 14:23:55.244 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:23:55 np0005592157 nova_compute[245707]: 2026-01-22 14:23:55.246 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:23:55 np0005592157 nova_compute[245707]: 2026-01-22 14:23:55.246 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:23:55 np0005592157 nova_compute[245707]: 2026-01-22 14:23:55.246 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:55 np0005592157 nova_compute[245707]: 2026-01-22 14:23:55.267 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:55 np0005592157 nova_compute[245707]: 2026-01-22 14:23:55.268 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:23:55 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:56 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:23:56.086 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:23:56 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:23:56.087 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.088 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:56.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.298 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.299 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.300 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.300 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.300 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:23:56 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:23:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827271453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.737 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.851 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.852 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.855 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:23:56 np0005592157 nova_compute[245707]: 2026-01-22 14:23:56.855 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.002 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.003 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4495MB free_disk=20.771656036376953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.004 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.004 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:23:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:23:57.089 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:23:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:57.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.113 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.113 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.114 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.114 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.114 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.114 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance a8700e89-4334-472c-bf9a-9e203a561f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.114 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.115 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 09:23:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.386 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:23:57 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:23:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804744684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.805 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.811 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:23:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.852 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.907 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 09:23:57 np0005592157 nova_compute[245707]: 2026-01-22 14:23:57.907 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:23:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:23:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:58.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:23:58 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:23:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:59 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:00 np0005592157 nova_compute[245707]: 2026-01-22 14:24:00.268 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:24:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:00.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:00 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:00 np0005592157 nova_compute[245707]: 2026-01-22 14:24:00.909 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:24:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:01 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:01 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:02 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:02 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:03 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010321245986878839 of space, bias 1.0, pg target 3.0963737960636517 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:24:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:24:04 np0005592157 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 09:24:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:04.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:04 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:05.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:05 np0005592157 nova_compute[245707]: 2026-01-22 14:24:05.270 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:24:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:05 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:24:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:06.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:24:06 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:07.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:07 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:07 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:08.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:08 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:10 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:10 np0005592157 nova_compute[245707]: 2026-01-22 14:24:10.272 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:24:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:10.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:11 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:12.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:12 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:13 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:13 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:14.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:14 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:15 np0005592157 nova_compute[245707]: 2026-01-22 14:24:15.276 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 09:24:15 np0005592157 nova_compute[245707]: 2026-01-22 14:24:15.278 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 09:24:15 np0005592157 nova_compute[245707]: 2026-01-22 14:24:15.278 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 09:24:15 np0005592157 nova_compute[245707]: 2026-01-22 14:24:15.279 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 09:24:15 np0005592157 nova_compute[245707]: 2026-01-22 14:24:15.308 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:24:15 np0005592157 nova_compute[245707]: 2026-01-22 14:24:15.309 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 09:24:15 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:16.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:24:16 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:17.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:17 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:24:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:24:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4024552461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:24:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:24:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4024552461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:24:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:18 np0005592157 podman[282033]: 2026-01-22 14:24:18.311058411 +0000 UTC m=+0.047426950 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:24:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:18.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:18 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:19.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:19 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 524 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 597 B/s rd, 475 KiB/s wr, 0 op/s
Jan 22 09:24:20 np0005592157 nova_compute[245707]: 2026-01-22 14:24:20.310 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:20.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:20 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 09:24:21 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:23 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:23 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 09:24:24 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:24 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:24.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:25 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:25 np0005592157 nova_compute[245707]: 2026-01-22 14:24:25.312 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:24:25 np0005592157 podman[282055]: 2026-01-22 14:24:25.349099561 +0000 UTC m=+0.084036500 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 09:24:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 09:24:26 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:26.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:27 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 09:24:28 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:28 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:29.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:29 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 09:24:30 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:30 np0005592157 nova_compute[245707]: 2026-01-22 14:24:30.314 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:24:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:30.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:31.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:31 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:24:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 11K writes, 39K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 2855 syncs, 3.86 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 651 writes, 1578 keys, 651 commit groups, 1.0 writes per commit group, ingest: 1.41 MB, 0.00 MB/s#012Interval WAL: 651 writes, 274 syncs, 2.38 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:24:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.321791) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872321843, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1138, "num_deletes": 251, "total_data_size": 1436132, "memory_usage": 1464416, "flush_reason": "Manual Compaction"}
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872333289, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 922025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45369, "largest_seqno": 46506, "table_properties": {"data_size": 917757, "index_size": 1664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13046, "raw_average_key_size": 21, "raw_value_size": 907787, "raw_average_value_size": 1500, "num_data_blocks": 72, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091798, "oldest_key_time": 1769091798, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 11581 microseconds, and 6391 cpu microseconds.
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333367) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 922025 bytes OK
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333392) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335329) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335344) EVENT_LOG_v1 {"time_micros": 1769091872335339, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335359) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1430831, prev total WAL file size 1430831, number of live WAL files 2.
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(900KB)], [98(10MB)]
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872335894, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 12095574, "oldest_snapshot_seqno": -1}
Jan 22 09:24:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 9093 keys, 8667330 bytes, temperature: kUnknown
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872390852, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8667330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8616850, "index_size": 26624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 244664, "raw_average_key_size": 26, "raw_value_size": 8461071, "raw_average_value_size": 930, "num_data_blocks": 1006, "num_entries": 9093, "num_filter_entries": 9093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.391150) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8667330 bytes
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.393332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.6 rd, 157.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(22.5) write-amplify(9.4) OK, records in: 9577, records dropped: 484 output_compression: NoCompression
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.393355) EVENT_LOG_v1 {"time_micros": 1769091872393344, "job": 58, "event": "compaction_finished", "compaction_time_micros": 55091, "compaction_time_cpu_micros": 22572, "output_level": 6, "num_output_files": 1, "total_output_size": 8667330, "num_input_records": 9577, "num_output_records": 9093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872393766, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872396563, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.396670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.396678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.396680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.396681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:24:32.396682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:33.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:33 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:33 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 22 09:24:34 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:34.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:35.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:35 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:35 np0005592157 nova_compute[245707]: 2026-01-22 14:24:35.316 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 22 09:24:36 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:36.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:37.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:37 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 09:24:38 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:38 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:39.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:39 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 09:24:40 np0005592157 nova_compute[245707]: 2026-01-22 14:24:40.317 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:24:40 np0005592157 nova_compute[245707]: 2026-01-22 14:24:40.319 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:24:40 np0005592157 nova_compute[245707]: 2026-01-22 14:24:40.319 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:24:40 np0005592157 nova_compute[245707]: 2026-01-22 14:24:40.319 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:24:40 np0005592157 nova_compute[245707]: 2026-01-22 14:24:40.320 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:24:40 np0005592157 nova_compute[245707]: 2026-01-22 14:24:40.321 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:40 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:41.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:41 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:42 np0005592157 nova_compute[245707]: 2026-01-22 14:24:42.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:42 np0005592157 nova_compute[245707]: 2026-01-22 14:24:42.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:24:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:42.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:42 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:43.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:43 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:43 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:44.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:44 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 09:24:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:45.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:45 np0005592157 nova_compute[245707]: 2026-01-22 14:24:45.320 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:45 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:46 np0005592157 nova_compute[245707]: 2026-01-22 14:24:46.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:46.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:46 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:24:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:47.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:47 np0005592157 nova_compute[245707]: 2026-01-22 14:24:47.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:47 np0005592157 nova_compute[245707]: 2026-01-22 14:24:47.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:24:47
Jan 22 09:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.log', '.rgw.root']
Jan 22 09:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:24:47 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:24:47.597 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:24:47.598 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:24:47.598 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:48.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:48 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:49.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:49 np0005592157 nova_compute[245707]: 2026-01-22 14:24:49.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:49 np0005592157 podman[282143]: 2026-01-22 14:24:49.30474373 +0000 UTC m=+0.047969864 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:24:49 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:50 np0005592157 nova_compute[245707]: 2026-01-22 14:24:50.321 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:50 np0005592157 nova_compute[245707]: 2026-01-22 14:24:50.323 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:24:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:50.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:24:50 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:51 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:24:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:24:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.277 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:52.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2283eaba-5d42-4927-ae67-e9305abc03d5 does not exist
Jan 22 09:24:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d5fb11ae-e857-489d-a6e6-901bb06c88ac does not exist
Jan 22 09:24:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev aced4ee9-0dd3-4e9f-8322-2925a626c802 does not exist
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:24:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.781 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.782 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquired lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.782 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:24:52 np0005592157 nova_compute[245707]: 2026-01-22 14:24:52.782 245711 DEBUG nova.objects.instance [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:24:53 np0005592157 podman[282437]: 2026-01-22 14:24:53.073236006 +0000 UTC m=+0.035537454 container create eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:24:53 np0005592157 nova_compute[245707]: 2026-01-22 14:24:53.074 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:24:53 np0005592157 systemd[1]: Started libpod-conmon-eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209.scope.
Jan 22 09:24:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:24:53 np0005592157 podman[282437]: 2026-01-22 14:24:53.154380804 +0000 UTC m=+0.116682272 container init eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:24:53 np0005592157 podman[282437]: 2026-01-22 14:24:53.057458774 +0000 UTC m=+0.019760242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:24:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:53.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:53 np0005592157 podman[282437]: 2026-01-22 14:24:53.162540787 +0000 UTC m=+0.124842235 container start eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:24:53 np0005592157 stoic_williams[282453]: 167 167
Jan 22 09:24:53 np0005592157 systemd[1]: libpod-eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209.scope: Deactivated successfully.
Jan 22 09:24:53 np0005592157 conmon[282453]: conmon eb2bbd3e0835c6c92325 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209.scope/container/memory.events
Jan 22 09:24:53 np0005592157 podman[282437]: 2026-01-22 14:24:53.169666604 +0000 UTC m=+0.131968072 container attach eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:24:53 np0005592157 podman[282437]: 2026-01-22 14:24:53.169962941 +0000 UTC m=+0.132264379 container died eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:24:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-34615c3c312162161814e8df45a3412774a874439356616d94ba02452ce54ec5-merged.mount: Deactivated successfully.
Jan 22 09:24:53 np0005592157 podman[282437]: 2026-01-22 14:24:53.223541233 +0000 UTC m=+0.185842681 container remove eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:24:53 np0005592157 systemd[1]: libpod-conmon-eb2bbd3e0835c6c92325287c3b4b964dd5b231c3a1e63b706d378c6f2519c209.scope: Deactivated successfully.
Jan 22 09:24:53 np0005592157 podman[282476]: 2026-01-22 14:24:53.377957822 +0000 UTC m=+0.040347964 container create 7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:24:53 np0005592157 systemd[1]: Started libpod-conmon-7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad.scope.
Jan 22 09:24:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:24:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ebac074d9a6666404b8353323399b424cd85f4bf1a56666a498d3d9e37bd31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ebac074d9a6666404b8353323399b424cd85f4bf1a56666a498d3d9e37bd31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ebac074d9a6666404b8353323399b424cd85f4bf1a56666a498d3d9e37bd31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ebac074d9a6666404b8353323399b424cd85f4bf1a56666a498d3d9e37bd31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ebac074d9a6666404b8353323399b424cd85f4bf1a56666a498d3d9e37bd31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:53 np0005592157 podman[282476]: 2026-01-22 14:24:53.361231286 +0000 UTC m=+0.023621448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:24:53 np0005592157 podman[282476]: 2026-01-22 14:24:53.466176115 +0000 UTC m=+0.128566277 container init 7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:24:53 np0005592157 podman[282476]: 2026-01-22 14:24:53.47162835 +0000 UTC m=+0.134018492 container start 7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:24:53 np0005592157 podman[282476]: 2026-01-22 14:24:53.475345953 +0000 UTC m=+0.137736095 container attach 7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:24:53 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:24:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:24:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:54 np0005592157 nova_compute[245707]: 2026-01-22 14:24:54.062 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:24:54 np0005592157 nova_compute[245707]: 2026-01-22 14:24:54.088 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Releasing lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:24:54 np0005592157 nova_compute[245707]: 2026-01-22 14:24:54.089 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:24:54 np0005592157 dazzling_fermi[282492]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:24:54 np0005592157 dazzling_fermi[282492]: --> relative data size: 1.0
Jan 22 09:24:54 np0005592157 dazzling_fermi[282492]: --> All data devices are unavailable
Jan 22 09:24:54 np0005592157 systemd[1]: libpod-7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad.scope: Deactivated successfully.
Jan 22 09:24:54 np0005592157 podman[282476]: 2026-01-22 14:24:54.26966323 +0000 UTC m=+0.932053382 container died 7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:24:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e0ebac074d9a6666404b8353323399b424cd85f4bf1a56666a498d3d9e37bd31-merged.mount: Deactivated successfully.
Jan 22 09:24:54 np0005592157 podman[282476]: 2026-01-22 14:24:54.3308167 +0000 UTC m=+0.993206842 container remove 7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:24:54 np0005592157 systemd[1]: libpod-conmon-7e46541be2925558cd588b1cb7643cdbdac84f2b5955e15093cd4503ee0243ad.scope: Deactivated successfully.
Jan 22 09:24:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:24:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:54.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:24:54 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:54 np0005592157 podman[282660]: 2026-01-22 14:24:54.877202374 +0000 UTC m=+0.036149270 container create 947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:24:54 np0005592157 systemd[1]: Started libpod-conmon-947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307.scope.
Jan 22 09:24:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:24:54 np0005592157 podman[282660]: 2026-01-22 14:24:54.937035311 +0000 UTC m=+0.095982227 container init 947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:24:54 np0005592157 podman[282660]: 2026-01-22 14:24:54.944283672 +0000 UTC m=+0.103230568 container start 947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:24:54 np0005592157 beautiful_mayer[282676]: 167 167
Jan 22 09:24:54 np0005592157 podman[282660]: 2026-01-22 14:24:54.947740828 +0000 UTC m=+0.106687754 container attach 947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 09:24:54 np0005592157 systemd[1]: libpod-947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307.scope: Deactivated successfully.
Jan 22 09:24:54 np0005592157 podman[282660]: 2026-01-22 14:24:54.948800064 +0000 UTC m=+0.107746960 container died 947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:24:54 np0005592157 podman[282660]: 2026-01-22 14:24:54.861272938 +0000 UTC m=+0.020219854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:24:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ff6e05d4f3d2998d43e1d72242f7a23434112d847fcc411c601e993a348edf1c-merged.mount: Deactivated successfully.
Jan 22 09:24:54 np0005592157 podman[282660]: 2026-01-22 14:24:54.982261776 +0000 UTC m=+0.141208672 container remove 947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:24:54 np0005592157 systemd[1]: libpod-conmon-947c6cb07bffafbe672a5dbe78c42994ef4cebcdb2542699982ad5fd54318307.scope: Deactivated successfully.
Jan 22 09:24:55 np0005592157 nova_compute[245707]: 2026-01-22 14:24:55.084 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:55 np0005592157 podman[282700]: 2026-01-22 14:24:55.1348918 +0000 UTC m=+0.045821380 container create dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:24:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:55 np0005592157 systemd[1]: Started libpod-conmon-dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef.scope.
Jan 22 09:24:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:24:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f99a39281670afc4409305f442dd1abaeedda5f1970eccb55e20a1fa04da59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f99a39281670afc4409305f442dd1abaeedda5f1970eccb55e20a1fa04da59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f99a39281670afc4409305f442dd1abaeedda5f1970eccb55e20a1fa04da59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f99a39281670afc4409305f442dd1abaeedda5f1970eccb55e20a1fa04da59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:55 np0005592157 podman[282700]: 2026-01-22 14:24:55.114839562 +0000 UTC m=+0.025769192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:24:55 np0005592157 podman[282700]: 2026-01-22 14:24:55.212965981 +0000 UTC m=+0.123895591 container init dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:24:55 np0005592157 podman[282700]: 2026-01-22 14:24:55.220393916 +0000 UTC m=+0.131323496 container start dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:24:55 np0005592157 podman[282700]: 2026-01-22 14:24:55.223972735 +0000 UTC m=+0.134902315 container attach dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:24:55 np0005592157 nova_compute[245707]: 2026-01-22 14:24:55.323 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:55 np0005592157 ceph-mon[74359]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:55 np0005592157 charming_mayer[282716]: {
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:    "0": [
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:        {
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "devices": [
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "/dev/loop3"
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            ],
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "lv_name": "ceph_lv0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "lv_size": "7511998464",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "name": "ceph_lv0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "tags": {
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.cluster_name": "ceph",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.crush_device_class": "",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.encrypted": "0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.osd_id": "0",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.type": "block",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:                "ceph.vdo": "0"
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            },
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "type": "block",
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:            "vg_name": "ceph_vg0"
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:        }
Jan 22 09:24:55 np0005592157 charming_mayer[282716]:    ]
Jan 22 09:24:55 np0005592157 charming_mayer[282716]: }
Jan 22 09:24:56 np0005592157 systemd[1]: libpod-dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef.scope: Deactivated successfully.
Jan 22 09:24:56 np0005592157 podman[282700]: 2026-01-22 14:24:56.001971996 +0000 UTC m=+0.912901576 container died dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mayer, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 09:24:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-91f99a39281670afc4409305f442dd1abaeedda5f1970eccb55e20a1fa04da59-merged.mount: Deactivated successfully.
Jan 22 09:24:56 np0005592157 podman[282700]: 2026-01-22 14:24:56.05765277 +0000 UTC m=+0.968582350 container remove dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:24:56 np0005592157 systemd[1]: libpod-conmon-dec6521241fe947afda8a4d80096b5e5c4fec0c24bf0bd64bb11c2bf77f893ef.scope: Deactivated successfully.
Jan 22 09:24:56 np0005592157 podman[282728]: 2026-01-22 14:24:56.160495616 +0000 UTC m=+0.126522935 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:24:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:56.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:56 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:56 np0005592157 podman[282954]: 2026-01-22 14:24:56.641422402 +0000 UTC m=+0.041463622 container create 1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 09:24:56 np0005592157 systemd[1]: Started libpod-conmon-1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572.scope.
Jan 22 09:24:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:24:56 np0005592157 podman[282954]: 2026-01-22 14:24:56.622493881 +0000 UTC m=+0.022535121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:24:56 np0005592157 podman[282954]: 2026-01-22 14:24:56.721725238 +0000 UTC m=+0.121766478 container init 1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:24:56 np0005592157 podman[282954]: 2026-01-22 14:24:56.730010964 +0000 UTC m=+0.130052184 container start 1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:24:56 np0005592157 podman[282954]: 2026-01-22 14:24:56.734157747 +0000 UTC m=+0.134198997 container attach 1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:24:56 np0005592157 quizzical_cannon[282970]: 167 167
Jan 22 09:24:56 np0005592157 systemd[1]: libpod-1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572.scope: Deactivated successfully.
Jan 22 09:24:56 np0005592157 podman[282954]: 2026-01-22 14:24:56.735694515 +0000 UTC m=+0.135735735 container died 1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:24:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d2913369c256a99e522d6383afdd1dfc5a89db70c87588accb96cf7302b5b007-merged.mount: Deactivated successfully.
Jan 22 09:24:56 np0005592157 podman[282954]: 2026-01-22 14:24:56.772286785 +0000 UTC m=+0.172328005 container remove 1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:24:56 np0005592157 systemd[1]: libpod-conmon-1ab630b6baa32c4b0afa71875a64349c10cfb95c897026acac4db2d7a3cdf572.scope: Deactivated successfully.
Jan 22 09:24:56 np0005592157 podman[282994]: 2026-01-22 14:24:56.929130664 +0000 UTC m=+0.044489437 container create ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:24:56 np0005592157 systemd[1]: Started libpod-conmon-ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0.scope.
Jan 22 09:24:57 np0005592157 podman[282994]: 2026-01-22 14:24:56.908970723 +0000 UTC m=+0.024329486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:24:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:24:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8620077da4a676501d267f7a36ef80cdc65e7568ae3e851c6f7265370208928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8620077da4a676501d267f7a36ef80cdc65e7568ae3e851c6f7265370208928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8620077da4a676501d267f7a36ef80cdc65e7568ae3e851c6f7265370208928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8620077da4a676501d267f7a36ef80cdc65e7568ae3e851c6f7265370208928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:24:57 np0005592157 podman[282994]: 2026-01-22 14:24:57.036363321 +0000 UTC m=+0.151722084 container init ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rosalind, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:24:57 np0005592157 podman[282994]: 2026-01-22 14:24:57.042958184 +0000 UTC m=+0.158316937 container start ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rosalind, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:24:57 np0005592157 podman[282994]: 2026-01-22 14:24:57.046913523 +0000 UTC m=+0.162272296 container attach ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:24:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:57.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 2887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: Health check update: 29 slow ops, oldest one blocked for 2887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]: {
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]:        "osd_id": 0,
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]:        "type": "bluestore"
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]:    }
Jan 22 09:24:57 np0005592157 modest_rosalind[283011]: }
Jan 22 09:24:57 np0005592157 systemd[1]: libpod-ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0.scope: Deactivated successfully.
Jan 22 09:24:57 np0005592157 podman[282994]: 2026-01-22 14:24:57.853054864 +0000 UTC m=+0.968413597 container died ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rosalind, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:24:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:24:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e8620077da4a676501d267f7a36ef80cdc65e7568ae3e851c6f7265370208928-merged.mount: Deactivated successfully.
Jan 22 09:24:57 np0005592157 podman[282994]: 2026-01-22 14:24:57.919909116 +0000 UTC m=+1.035267859 container remove ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rosalind, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:24:57 np0005592157 systemd[1]: libpod-conmon-ae997c705eb541ac7e5bebcacc138477dfcda6c230b28ce789c1fe8db37141d0.scope: Deactivated successfully.
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:24:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b86a061a-9988-4980-90f1-cc4270663e27 does not exist
Jan 22 09:24:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3313671a-644d-4a61-8d21-ac2f14efc1a2 does not exist
Jan 22 09:24:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev de2d9226-3354-4025-b516-5e6d9db5b825 does not exist
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.265 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.266 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.266 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.266 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.267 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:58 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:24:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660272869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.693 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.757 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.757 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.761 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.761 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.897 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.897 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4449MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.898 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:58 np0005592157 nova_compute[245707]: 2026-01-22 14:24:58.898 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.001 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.002 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.002 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.003 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.003 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.003 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance a8700e89-4334-472c-bf9a-9e203a561f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.003 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.003 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.134 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:24:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:59.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:24:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886356108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.605 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.610 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:24:59 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.650 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.652 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:24:59 np0005592157 nova_compute[245707]: 2026-01-22 14:24:59.652 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:00 np0005592157 nova_compute[245707]: 2026-01-22 14:25:00.325 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:00.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:00 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:01 np0005592157 nova_compute[245707]: 2026-01-22 14:25:01.653 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:25:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:02.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:25:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:02 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:03 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:03 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01131016773683231 of space, bias 1.0, pg target 3.393050321049693 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:25:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:25:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:04 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:05 np0005592157 nova_compute[245707]: 2026-01-22 14:25:05.327 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:05 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:07 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:08 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:09 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:10 np0005592157 nova_compute[245707]: 2026-01-22 14:25:10.329 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:10.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:10 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:13 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:13 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:13 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:13.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:14.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:15 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:15.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:15 np0005592157 nova_compute[245707]: 2026-01-22 14:25:15.331 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:15 np0005592157 nova_compute[245707]: 2026-01-22 14:25:15.333 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:15 np0005592157 nova_compute[245707]: 2026-01-22 14:25:15.333 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:25:15 np0005592157 nova_compute[245707]: 2026-01-22 14:25:15.333 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:25:15 np0005592157 nova_compute[245707]: 2026-01-22 14:25:15.372 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:15 np0005592157 nova_compute[245707]: 2026-01-22 14:25:15.373 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:25:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:16 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:25:17 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:17.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:18 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:18 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:25:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4266357046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:25:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:25:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4266357046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:25:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:19 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:19.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:20 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:20 np0005592157 podman[283200]: 2026-01-22 14:25:20.332948167 +0000 UTC m=+0.055513031 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:25:20 np0005592157 nova_compute[245707]: 2026-01-22 14:25:20.374 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:20 np0005592157 nova_compute[245707]: 2026-01-22 14:25:20.375 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:20.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:21.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:22.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:23.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:23 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:23 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:24.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:24 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:25.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:25 np0005592157 nova_compute[245707]: 2026-01-22 14:25:25.376 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:25 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:26 np0005592157 podman[283224]: 2026-01-22 14:25:26.368046162 +0000 UTC m=+0.101761161 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:25:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:26.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:26 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:27.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:27 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:28.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:28 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:29.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:29 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:30 np0005592157 nova_compute[245707]: 2026-01-22 14:25:30.378 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:30 np0005592157 nova_compute[245707]: 2026-01-22 14:25:30.380 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:30 np0005592157 nova_compute[245707]: 2026-01-22 14:25:30.380 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:25:30 np0005592157 nova_compute[245707]: 2026-01-22 14:25:30.380 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:25:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:30 np0005592157 nova_compute[245707]: 2026-01-22 14:25:30.426 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:30 np0005592157 nova_compute[245707]: 2026-01-22 14:25:30.427 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:25:30 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:31.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:31 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:32 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:32 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:32 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:33 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:34.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:34 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:25:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:25:35 np0005592157 nova_compute[245707]: 2026-01-22 14:25:35.428 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:35 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:36.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:36 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:37.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:37 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:37 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:38.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:38 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:38 np0005592157 nova_compute[245707]: 2026-01-22 14:25:38.977 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:39.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:39 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:40 np0005592157 nova_compute[245707]: 2026-01-22 14:25:40.429 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:40 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:41.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:41 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:42.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:43 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:43 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:25:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:25:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:44 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:44 np0005592157 nova_compute[245707]: 2026-01-22 14:25:44.283 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:44 np0005592157 nova_compute[245707]: 2026-01-22 14:25:44.283 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:25:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:45 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:45.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:45 np0005592157 nova_compute[245707]: 2026-01-22 14:25:45.432 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:45 np0005592157 nova_compute[245707]: 2026-01-22 14:25:45.434 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:45 np0005592157 nova_compute[245707]: 2026-01-22 14:25:45.434 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:25:45 np0005592157 nova_compute[245707]: 2026-01-22 14:25:45.434 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:25:45 np0005592157 nova_compute[245707]: 2026-01-22 14:25:45.464 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:45 np0005592157 nova_compute[245707]: 2026-01-22 14:25:45.464 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:25:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:46 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:46.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:25:47 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:47.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:47 np0005592157 nova_compute[245707]: 2026-01-22 14:25:47.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:47 np0005592157 nova_compute[245707]: 2026-01-22 14:25:47.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:47 np0005592157 nova_compute[245707]: 2026-01-22 14:25:47.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:25:47
Jan 22 09:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta']
Jan 22 09:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:25:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:25:47.598 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:25:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:25:47.599 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:25:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:25:47.599 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:25:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:48 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:48 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:48.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:49.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:49 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:50 np0005592157 nova_compute[245707]: 2026-01-22 14:25:50.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:50 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:50 np0005592157 nova_compute[245707]: 2026-01-22 14:25:50.465 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:50 np0005592157 nova_compute[245707]: 2026-01-22 14:25:50.466 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:51 np0005592157 podman[283312]: 2026-01-22 14:25:51.315542622 +0000 UTC m=+0.050967408 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:25:51 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:52 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:53 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.274 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.275 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.275 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.275 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.275 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:54 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.551 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.552 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquired lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.552 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:25:54 np0005592157 nova_compute[245707]: 2026-01-22 14:25:54.552 245711 DEBUG nova.objects.instance [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:25:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:55.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:55 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:55 np0005592157 nova_compute[245707]: 2026-01-22 14:25:55.467 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:25:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:56 np0005592157 nova_compute[245707]: 2026-01-22 14:25:56.423 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:25:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:56 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:56.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:56 np0005592157 podman[283358]: 2026-01-22 14:25:56.6207177 +0000 UTC m=+0.082008300 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:25:56 np0005592157 nova_compute[245707]: 2026-01-22 14:25:56.743 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:25:56 np0005592157 nova_compute[245707]: 2026-01-22 14:25:56.777 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Releasing lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:25:56 np0005592157 nova_compute[245707]: 2026-01-22 14:25:56.778 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:25:56 np0005592157 nova_compute[245707]: 2026-01-22 14:25:56.778 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:56 np0005592157 nova_compute[245707]: 2026-01-22 14:25:56.778 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:25:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:57.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:57 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:57 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:57 np0005592157 nova_compute[245707]: 2026-01-22 14:25:57.792 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:57 np0005592157 nova_compute[245707]: 2026-01-22 14:25:57.793 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.268 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.269 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.269 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.269 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.269 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:25:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:25:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:25:58 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:25:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199945618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.721 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.804 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.805 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.809 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.810 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.948 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.949 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4509MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.949 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:25:58 np0005592157 nova_compute[245707]: 2026-01-22 14:25:58.950 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:25:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:25:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:25:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:25:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:59.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.468 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.468 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.468 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.468 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.468 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.468 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance a8700e89-4334-472c-bf9a-9e203a561f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.468 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.469 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.558 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing inventories for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.653 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating ProviderTree inventory for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.653 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Updating inventory in ProviderTree for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.693 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing aggregate associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:25:59 np0005592157 nova_compute[245707]: 2026-01-22 14:25:59.724 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Refreshing trait associations for resource provider 25bab4de-b201-44ab-9630-4373ed73bbb5, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev be00a971-45c1-4614-962c-1c6cfbc4c324 does not exist
Jan 22 09:25:59 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1d2816be-8f17-4750-8490-80ff36c8b0cd does not exist
Jan 22 09:25:59 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9913ba7a-0f1e-4a2f-802c-644d5716e37f does not exist
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.827685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959827751, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1308, "num_deletes": 251, "total_data_size": 1683388, "memory_usage": 1721312, "flush_reason": "Manual Compaction"}
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959842359, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1644723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46507, "largest_seqno": 47814, "table_properties": {"data_size": 1639072, "index_size": 2791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14729, "raw_average_key_size": 20, "raw_value_size": 1626619, "raw_average_value_size": 2303, "num_data_blocks": 120, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091873, "oldest_key_time": 1769091873, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 14740 microseconds, and 5832 cpu microseconds.
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.842427) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1644723 bytes OK
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.842446) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844742) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844764) EVENT_LOG_v1 {"time_micros": 1769091959844757, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1677473, prev total WAL file size 1677473, number of live WAL files 2.
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.845810) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1606KB)], [101(8464KB)]
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959845874, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10312053, "oldest_snapshot_seqno": -1}
Jan 22 09:25:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 9282 keys, 8613948 bytes, temperature: kUnknown
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959902454, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8613948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8562557, "index_size": 27087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 249823, "raw_average_key_size": 26, "raw_value_size": 8403665, "raw_average_value_size": 905, "num_data_blocks": 1020, "num_entries": 9282, "num_filter_entries": 9282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.902692) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8613948 bytes
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.904159) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 152.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.5) write-amplify(5.2) OK, records in: 9799, records dropped: 517 output_compression: NoCompression
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.904175) EVENT_LOG_v1 {"time_micros": 1769091959904167, "job": 60, "event": "compaction_finished", "compaction_time_micros": 56656, "compaction_time_cpu_micros": 21679, "output_level": 6, "num_output_files": 1, "total_output_size": 8613948, "num_input_records": 9799, "num_output_records": 9282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959904587, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959906957, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.845745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.907013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.907016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.907018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.907019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:25:59.907020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:26:00 np0005592157 nova_compute[245707]: 2026-01-22 14:26:00.087 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:26:00 np0005592157 podman[283846]: 2026-01-22 14:26:00.332248836 +0000 UTC m=+0.039910203 container create a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hopper, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:26:00 np0005592157 systemd[1]: Started libpod-conmon-a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b.scope.
Jan 22 09:26:00 np0005592157 podman[283846]: 2026-01-22 14:26:00.317038268 +0000 UTC m=+0.024699655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:26:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:26:00 np0005592157 podman[283846]: 2026-01-22 14:26:00.427226348 +0000 UTC m=+0.134887735 container init a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hopper, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:26:00 np0005592157 podman[283846]: 2026-01-22 14:26:00.435613686 +0000 UTC m=+0.143275073 container start a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:26:00 np0005592157 podman[283846]: 2026-01-22 14:26:00.440214421 +0000 UTC m=+0.147875848 container attach a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hopper, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:26:00 np0005592157 intelligent_hopper[283862]: 167 167
Jan 22 09:26:00 np0005592157 systemd[1]: libpod-a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b.scope: Deactivated successfully.
Jan 22 09:26:00 np0005592157 podman[283846]: 2026-01-22 14:26:00.441952614 +0000 UTC m=+0.149613981 container died a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:26:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5a27ba4b38deca3760c257b71e6f523e9b9cb821ad522e173f6b4d82f72c5cc0-merged.mount: Deactivated successfully.
Jan 22 09:26:00 np0005592157 nova_compute[245707]: 2026-01-22 14:26:00.469 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:26:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:26:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/226630284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:26:00 np0005592157 podman[283846]: 2026-01-22 14:26:00.494048759 +0000 UTC m=+0.201710126 container remove a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:26:00 np0005592157 nova_compute[245707]: 2026-01-22 14:26:00.510 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:26:00 np0005592157 systemd[1]: libpod-conmon-a3578d022d63a74dfbdff7ba20ecc4f92cb8b5a82ae90d1f35edd6d3f1591b0b.scope: Deactivated successfully.
Jan 22 09:26:00 np0005592157 nova_compute[245707]: 2026-01-22 14:26:00.516 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:26:00 np0005592157 nova_compute[245707]: 2026-01-22 14:26:00.554 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:26:00 np0005592157 nova_compute[245707]: 2026-01-22 14:26:00.555 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:26:00 np0005592157 nova_compute[245707]: 2026-01-22 14:26:00.555 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:00 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:26:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:26:00 np0005592157 podman[283888]: 2026-01-22 14:26:00.657734858 +0000 UTC m=+0.036469578 container create 58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:26:00 np0005592157 systemd[1]: Started libpod-conmon-58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753.scope.
Jan 22 09:26:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:26:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed11b064e75c8d92594cb30f5c9fa04d5c4e280bb5515611769a3a12aad4bb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed11b064e75c8d92594cb30f5c9fa04d5c4e280bb5515611769a3a12aad4bb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed11b064e75c8d92594cb30f5c9fa04d5c4e280bb5515611769a3a12aad4bb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed11b064e75c8d92594cb30f5c9fa04d5c4e280bb5515611769a3a12aad4bb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed11b064e75c8d92594cb30f5c9fa04d5c4e280bb5515611769a3a12aad4bb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:00 np0005592157 podman[283888]: 2026-01-22 14:26:00.642397306 +0000 UTC m=+0.021132046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:26:00 np0005592157 podman[283888]: 2026-01-22 14:26:00.743213993 +0000 UTC m=+0.121948733 container init 58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:26:00 np0005592157 podman[283888]: 2026-01-22 14:26:00.749279524 +0000 UTC m=+0.128014244 container start 58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:26:00 np0005592157 podman[283888]: 2026-01-22 14:26:00.753063308 +0000 UTC m=+0.131798048 container attach 58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:26:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:01.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:01 np0005592157 sharp_chaum[283905]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:26:01 np0005592157 sharp_chaum[283905]: --> relative data size: 1.0
Jan 22 09:26:01 np0005592157 sharp_chaum[283905]: --> All data devices are unavailable
Jan 22 09:26:01 np0005592157 systemd[1]: libpod-58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753.scope: Deactivated successfully.
Jan 22 09:26:01 np0005592157 conmon[283905]: conmon 58bdcf77729bb1307447 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753.scope/container/memory.events
Jan 22 09:26:01 np0005592157 podman[283888]: 2026-01-22 14:26:01.592234701 +0000 UTC m=+0.970969421 container died 58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:26:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0ed11b064e75c8d92594cb30f5c9fa04d5c4e280bb5515611769a3a12aad4bb4-merged.mount: Deactivated successfully.
Jan 22 09:26:01 np0005592157 podman[283888]: 2026-01-22 14:26:01.650594302 +0000 UTC m=+1.029329022 container remove 58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:26:01 np0005592157 systemd[1]: libpod-conmon-58bdcf77729bb130744752bcb4cb095c86b7c9bc254a7de306c21762bdab6753.scope: Deactivated successfully.
Jan 22 09:26:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:02 np0005592157 podman[284073]: 2026-01-22 14:26:02.204983686 +0000 UTC m=+0.035506554 container create 35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:26:02 np0005592157 systemd[1]: Started libpod-conmon-35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705.scope.
Jan 22 09:26:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:26:02 np0005592157 podman[284073]: 2026-01-22 14:26:02.280305548 +0000 UTC m=+0.110828436 container init 35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:26:02 np0005592157 podman[284073]: 2026-01-22 14:26:02.1902925 +0000 UTC m=+0.020815398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:26:02 np0005592157 podman[284073]: 2026-01-22 14:26:02.286727908 +0000 UTC m=+0.117250776 container start 35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:26:02 np0005592157 podman[284073]: 2026-01-22 14:26:02.289963679 +0000 UTC m=+0.120486557 container attach 35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:26:02 np0005592157 mystifying_wilson[284089]: 167 167
Jan 22 09:26:02 np0005592157 systemd[1]: libpod-35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705.scope: Deactivated successfully.
Jan 22 09:26:02 np0005592157 podman[284073]: 2026-01-22 14:26:02.291299102 +0000 UTC m=+0.121821980 container died 35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:26:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ca7c165e0587c813d7f5cebf844c93aeee31d34d57263b1b6ffb0168a182a370-merged.mount: Deactivated successfully.
Jan 22 09:26:02 np0005592157 podman[284073]: 2026-01-22 14:26:02.330343832 +0000 UTC m=+0.160866710 container remove 35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:26:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:02 np0005592157 systemd[1]: libpod-conmon-35f59810104327bfe40a0ba095342a56a7958e7d5921ec6d9ab9b9ea0e546705.scope: Deactivated successfully.
Jan 22 09:26:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:02 np0005592157 podman[284113]: 2026-01-22 14:26:02.486449954 +0000 UTC m=+0.043415191 container create 0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:26:02 np0005592157 systemd[1]: Started libpod-conmon-0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af.scope.
Jan 22 09:26:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:26:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890567d849a1fe4221ccb1ff15bc6d0c34b3b6d364d1bc15c2bf17cd82df3de6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890567d849a1fe4221ccb1ff15bc6d0c34b3b6d364d1bc15c2bf17cd82df3de6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890567d849a1fe4221ccb1ff15bc6d0c34b3b6d364d1bc15c2bf17cd82df3de6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890567d849a1fe4221ccb1ff15bc6d0c34b3b6d364d1bc15c2bf17cd82df3de6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:02 np0005592157 podman[284113]: 2026-01-22 14:26:02.558146506 +0000 UTC m=+0.115111763 container init 0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:26:02 np0005592157 podman[284113]: 2026-01-22 14:26:02.468287392 +0000 UTC m=+0.025252649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:26:02 np0005592157 podman[284113]: 2026-01-22 14:26:02.564974376 +0000 UTC m=+0.121939613 container start 0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mendeleev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:26:02 np0005592157 podman[284113]: 2026-01-22 14:26:02.568507404 +0000 UTC m=+0.125472661 container attach 0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mendeleev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:26:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:02 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]: {
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:    "0": [
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:        {
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "devices": [
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "/dev/loop3"
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            ],
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "lv_name": "ceph_lv0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "lv_size": "7511998464",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "name": "ceph_lv0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "tags": {
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.cluster_name": "ceph",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.crush_device_class": "",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.encrypted": "0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.osd_id": "0",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.type": "block",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:                "ceph.vdo": "0"
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            },
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "type": "block",
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:            "vg_name": "ceph_vg0"
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:        }
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]:    ]
Jan 22 09:26:03 np0005592157 vibrant_mendeleev[284129]: }
Jan 22 09:26:03 np0005592157 systemd[1]: libpod-0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af.scope: Deactivated successfully.
Jan 22 09:26:03 np0005592157 podman[284113]: 2026-01-22 14:26:03.291999631 +0000 UTC m=+0.848964908 container died 0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mendeleev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:26:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-890567d849a1fe4221ccb1ff15bc6d0c34b3b6d364d1bc15c2bf17cd82df3de6-merged.mount: Deactivated successfully.
Jan 22 09:26:03 np0005592157 podman[284113]: 2026-01-22 14:26:03.358121505 +0000 UTC m=+0.915086742 container remove 0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:26:03 np0005592157 systemd[1]: libpod-conmon-0db00dd28453fe7a57130503f8e2cff7bcdca74737e88adaf8a9cc784e7837af.scope: Deactivated successfully.
Jan 22 09:26:03 np0005592157 nova_compute[245707]: 2026-01-22 14:26:03.556 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:03 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:03 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:03 np0005592157 podman[284292]: 2026-01-22 14:26:03.924188369 +0000 UTC m=+0.038959880 container create ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:26:03 np0005592157 systemd[1]: Started libpod-conmon-ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435.scope.
Jan 22 09:26:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:26:03 np0005592157 podman[284292]: 2026-01-22 14:26:03.991885492 +0000 UTC m=+0.106657023 container init ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:26:03 np0005592157 podman[284292]: 2026-01-22 14:26:03.997894302 +0000 UTC m=+0.112665813 container start ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 22 09:26:04 np0005592157 admiring_tu[284308]: 167 167
Jan 22 09:26:04 np0005592157 systemd[1]: libpod-ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435.scope: Deactivated successfully.
Jan 22 09:26:04 np0005592157 podman[284292]: 2026-01-22 14:26:03.906975181 +0000 UTC m=+0.021746722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:26:04 np0005592157 podman[284292]: 2026-01-22 14:26:04.001833689 +0000 UTC m=+0.116605220 container attach ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:26:04 np0005592157 podman[284292]: 2026-01-22 14:26:04.002298251 +0000 UTC m=+0.117069762 container died ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:26:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b90bd294e574b9fd7154e6dd5ad4af2a0838da7815671d2f0dccd4af37fe5702-merged.mount: Deactivated successfully.
Jan 22 09:26:04 np0005592157 podman[284292]: 2026-01-22 14:26:04.035798644 +0000 UTC m=+0.150570155 container remove ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:26:04 np0005592157 systemd[1]: libpod-conmon-ee1b3400c7383a7b8c4b0e9372bc5a852b0d8551f4b7e2837c1a85f08790b435.scope: Deactivated successfully.
Jan 22 09:26:04 np0005592157 podman[284332]: 2026-01-22 14:26:04.1901214 +0000 UTC m=+0.045333548 container create b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:26:04 np0005592157 systemd[1]: Started libpod-conmon-b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38.scope.
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01131016773683231 of space, bias 1.0, pg target 3.393050321049693 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:26:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:26:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:26:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552a2cbdac3731f783bb96fe250d1bf2b80ae00d0f58942d6e7dada634e02422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552a2cbdac3731f783bb96fe250d1bf2b80ae00d0f58942d6e7dada634e02422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:04 np0005592157 podman[284332]: 2026-01-22 14:26:04.173030785 +0000 UTC m=+0.028242923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:26:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552a2cbdac3731f783bb96fe250d1bf2b80ae00d0f58942d6e7dada634e02422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552a2cbdac3731f783bb96fe250d1bf2b80ae00d0f58942d6e7dada634e02422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:26:04 np0005592157 podman[284332]: 2026-01-22 14:26:04.282074906 +0000 UTC m=+0.137287064 container init b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:26:04 np0005592157 podman[284332]: 2026-01-22 14:26:04.299475029 +0000 UTC m=+0.154687167 container start b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:26:04 np0005592157 podman[284332]: 2026-01-22 14:26:04.306743299 +0000 UTC m=+0.161955437 container attach b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:26:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:04 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]: {
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]:        "osd_id": 0,
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]:        "type": "bluestore"
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]:    }
Jan 22 09:26:05 np0005592157 quizzical_jennings[284348]: }
Jan 22 09:26:05 np0005592157 systemd[1]: libpod-b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38.scope: Deactivated successfully.
Jan 22 09:26:05 np0005592157 podman[284332]: 2026-01-22 14:26:05.171091439 +0000 UTC m=+1.026303607 container died b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:26:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-552a2cbdac3731f783bb96fe250d1bf2b80ae00d0f58942d6e7dada634e02422-merged.mount: Deactivated successfully.
Jan 22 09:26:05 np0005592157 podman[284332]: 2026-01-22 14:26:05.241435188 +0000 UTC m=+1.096647326 container remove b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:26:05 np0005592157 nova_compute[245707]: 2026-01-22 14:26:05.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:05.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:05 np0005592157 systemd[1]: libpod-conmon-b0ea9107da62f6f7ac5f7dd94092899a981d9e07dc0da5bb1db63af42385cb38.scope: Deactivated successfully.
Jan 22 09:26:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:26:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:26:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f18b36d6-36a4-4bb9-b272-6b50ab777081 does not exist
Jan 22 09:26:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev eb144c36-a33b-4848-9bea-d10845b32ea2 does not exist
Jan 22 09:26:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8c660f74-4236-4f5c-b08f-465275245321 does not exist
Jan 22 09:26:05 np0005592157 nova_compute[245707]: 2026-01-22 14:26:05.473 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:05 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:07.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:07 np0005592157 nova_compute[245707]: 2026-01-22 14:26:07.265 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:07 np0005592157 nova_compute[245707]: 2026-01-22 14:26:07.266 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:26:07 np0005592157 nova_compute[245707]: 2026-01-22 14:26:07.291 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:26:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:07 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:08 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:09.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:09 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:10.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:10 np0005592157 nova_compute[245707]: 2026-01-22 14:26:10.474 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:26:10 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:11.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:11 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:12 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:13.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:13 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:14.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:15.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:15 np0005592157 nova_compute[245707]: 2026-01-22 14:26:15.476 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:15 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:26:16 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:18 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:18 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:26:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:26:19 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:19.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:20 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:20 np0005592157 nova_compute[245707]: 2026-01-22 14:26:20.479 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:20.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:20 np0005592157 nova_compute[245707]: 2026-01-22 14:26:20.977 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.077 245711 WARNING nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] While synchronizing instance power states, found 6 instances in the database and 2 instances on the hypervisor.#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.078 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Sync already in progress for 18becd7f-5901-49d8-87eb-548e630001aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.078 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Sync already in progress for 1089392f-9bda-4904-9370-95fc2c3dd7c2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.079 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Sync already in progress for b8bec212-84ad-47fd-9608-2cc1999da6c4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.079 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Triggering sync for uuid 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.079 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Triggering sync for uuid df283133-db55-4a7e-a651-12dd25bae88e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.079 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Triggering sync for uuid a8700e89-4334-472c-bf9a-9e203a561f43 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.080 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.080 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "df283133-db55-4a7e-a651-12dd25bae88e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.080 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "a8700e89-4334-472c-bf9a-9e203a561f43" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.081 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "a8700e89-4334-472c-bf9a-9e203a561f43" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:26:21 np0005592157 nova_compute[245707]: 2026-01-22 14:26:21.127 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "a8700e89-4334-472c-bf9a-9e203a561f43" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:26:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:26:21 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:22 np0005592157 podman[284489]: 2026-01-22 14:26:22.328406166 +0000 UTC m=+0.061610653 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 09:26:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:23.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:23 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:24 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:24.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:25 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:25 np0005592157 nova_compute[245707]: 2026-01-22 14:26:25.480 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:26 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:27.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:27 np0005592157 podman[284511]: 2026-01-22 14:26:27.37084294 +0000 UTC m=+0.107328129 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:26:27 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:27 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:29 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:30 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:30 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:30 np0005592157 nova_compute[245707]: 2026-01-22 14:26:30.482 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:26:30 np0005592157 nova_compute[245707]: 2026-01-22 14:26:30.484 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:26:30 np0005592157 nova_compute[245707]: 2026-01-22 14:26:30.485 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:26:30 np0005592157 nova_compute[245707]: 2026-01-22 14:26:30.485 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:26:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:30 np0005592157 nova_compute[245707]: 2026-01-22 14:26:30.520 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:30 np0005592157 nova_compute[245707]: 2026-01-22 14:26:30.521 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:26:31 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:32 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:33 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:33 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:33.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:34 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:35 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:26:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:26:35 np0005592157 nova_compute[245707]: 2026-01-22 14:26:35.522 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:35 np0005592157 nova_compute[245707]: 2026-01-22 14:26:35.524 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:26:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:36 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:36.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:37 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:37.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:38 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:38 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:38.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:39 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:40 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:40 np0005592157 nova_compute[245707]: 2026-01-22 14:26:40.524 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:40 np0005592157 nova_compute[245707]: 2026-01-22 14:26:40.527 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:26:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:40.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:41.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:41 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:42 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:26:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:26:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:43.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:43 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:43 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:44 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:44.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:45.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:45 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:45 np0005592157 nova_compute[245707]: 2026-01-22 14:26:45.526 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:46 np0005592157 nova_compute[245707]: 2026-01-22 14:26:46.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:46 np0005592157 nova_compute[245707]: 2026-01-22 14:26:46.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:26:46 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:26:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:46.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:47 np0005592157 nova_compute[245707]: 2026-01-22 14:26:47.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:47.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 2997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:26:47
Jan 22 09:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta']
Jan 22 09:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:26:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:26:47.599 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:26:47.599 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:26:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:26:47.600 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:47 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:47 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 2997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:48 np0005592157 nova_compute[245707]: 2026-01-22 14:26:48.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:48 np0005592157 nova_compute[245707]: 2026-01-22 14:26:48.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:48.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:48 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:48 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:49.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:49 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:50 np0005592157 nova_compute[245707]: 2026-01-22 14:26:50.528 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:50.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:51 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:51 np0005592157 nova_compute[245707]: 2026-01-22 14:26:51.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:51.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:52 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:26:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:53.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:26:53 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:53 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:53 np0005592157 podman[284603]: 2026-01-22 14:26:53.328787169 +0000 UTC m=+0.060992668 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 09:26:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.267 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.267 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.267 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.267 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.267 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:54 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.949 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.950 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquired lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.950 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 09:26:54 np0005592157 nova_compute[245707]: 2026-01-22 14:26:54.950 245711 DEBUG nova.objects.instance [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 09:26:55 np0005592157 nova_compute[245707]: 2026-01-22 14:26:55.126 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 09:26:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:55.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:55 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:55 np0005592157 nova_compute[245707]: 2026-01-22 14:26:55.530 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 09:26:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:56 np0005592157 nova_compute[245707]: 2026-01-22 14:26:56.088 245711 DEBUG nova.network.neutron [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 09:26:56 np0005592157 nova_compute[245707]: 2026-01-22 14:26:56.103 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Releasing lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 09:26:56 np0005592157 nova_compute[245707]: 2026-01-22 14:26:56.104 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 09:26:56 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:26:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:56.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:26:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:57.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:57 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:57 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:26:58 np0005592157 nova_compute[245707]: 2026-01-22 14:26:58.099 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:26:58 np0005592157 podman[284678]: 2026-01-22 14:26:58.383025919 +0000 UTC m=+0.113421521 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:26:58 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:58.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:26:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:26:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:59.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.426 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.427 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.427 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.427 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.427 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:26:59 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:26:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362540640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.879 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:26:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.960 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.961 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.964 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 09:26:59 np0005592157 nova_compute[245707]: 2026-01-22 14:26:59.964 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.123 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.125 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4470MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.125 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.126 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.222 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.222 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.223 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.223 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.223 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.223 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance a8700e89-4334-472c-bf9a-9e203a561f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.224 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.224 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.354 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:27:00 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.532 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.533 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.534 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.534 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.535 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.537 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:27:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:00.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:27:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394215233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.768 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.774 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.799 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.801 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 09:27:00 np0005592157 nova_compute[245707]: 2026-01-22 14:27:00.801 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:27:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 09:27:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 3012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:02 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:27:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:02.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:27:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:03 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:03 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 3012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:03 np0005592157 nova_compute[245707]: 2026-01-22 14:27:03.803 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:27:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01131016773683231 of space, bias 1.0, pg target 3.393050321049693 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:27:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:27:04 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:04.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:27:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:05.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:27:05 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:05 np0005592157 nova_compute[245707]: 2026-01-22 14:27:05.535 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 09:27:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:27:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:27:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:06.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:07 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8c18bbf5-49b4-49a8-b984-77f2162c92e1 does not exist
Jan 22 09:27:07 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dc47ea63-00f2-471b-97e0-7a5ab971a689 does not exist
Jan 22 09:27:07 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 13cea41b-55b6-4ee2-b395-608f881bc825 does not exist
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:27:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.368681) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027368806, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1065, "num_deletes": 256, "total_data_size": 1315754, "memory_usage": 1346336, "flush_reason": "Manual Compaction"}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027380811, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 1294814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47815, "largest_seqno": 48879, "table_properties": {"data_size": 1289998, "index_size": 2212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12371, "raw_average_key_size": 20, "raw_value_size": 1279428, "raw_average_value_size": 2100, "num_data_blocks": 95, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091960, "oldest_key_time": 1769091960, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 12152 microseconds, and 4831 cpu microseconds.
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.380878) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 1294814 bytes OK
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.380902) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383248) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383270) EVENT_LOG_v1 {"time_micros": 1769092027383266, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383290) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1310707, prev total WAL file size 1310707, number of live WAL files 2.
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.384036) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323631' seq:0, type:0; will stop at (end)
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(1264KB)], [104(8412KB)]
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027384138, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9908762, "oldest_snapshot_seqno": -1}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 9364 keys, 9739550 bytes, temperature: kUnknown
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027460517, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 9739550, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9686502, "index_size": 28552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 252907, "raw_average_key_size": 27, "raw_value_size": 9524922, "raw_average_value_size": 1017, "num_data_blocks": 1078, "num_entries": 9364, "num_filter_entries": 9364, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.460758) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9739550 bytes
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.463469) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.6 rd, 127.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 9891, records dropped: 527 output_compression: NoCompression
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.463485) EVENT_LOG_v1 {"time_micros": 1769092027463477, "job": 62, "event": "compaction_finished", "compaction_time_micros": 76456, "compaction_time_cpu_micros": 26273, "output_level": 6, "num_output_files": 1, "total_output_size": 9739550, "num_input_records": 9891, "num_output_records": 9364, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027463785, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027465167, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.465205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.465210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.465211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.465212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:27:07.465214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:27:07 np0005592157 podman[285022]: 2026-01-22 14:27:07.690399858 +0000 UTC m=+0.060297270 container create 27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:27:07 np0005592157 systemd[1]: Started libpod-conmon-27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e.scope.
Jan 22 09:27:07 np0005592157 podman[285022]: 2026-01-22 14:27:07.665319214 +0000 UTC m=+0.035216636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:27:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:27:07 np0005592157 podman[285022]: 2026-01-22 14:27:07.782522018 +0000 UTC m=+0.152419430 container init 27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:27:07 np0005592157 podman[285022]: 2026-01-22 14:27:07.789732668 +0000 UTC m=+0.159630110 container start 27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 09:27:07 np0005592157 podman[285022]: 2026-01-22 14:27:07.793794629 +0000 UTC m=+0.163692031 container attach 27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:27:07 np0005592157 systemd[1]: libpod-27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e.scope: Deactivated successfully.
Jan 22 09:27:07 np0005592157 vigilant_swirles[285038]: 167 167
Jan 22 09:27:07 np0005592157 conmon[285038]: conmon 27a73228feb2e0832189 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e.scope/container/memory.events
Jan 22 09:27:07 np0005592157 podman[285022]: 2026-01-22 14:27:07.799006168 +0000 UTC m=+0.168903590 container died 27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:27:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1abe681d4af463dfe568a6e564cebd19415472dcae3989424eafbdc2f90983cf-merged.mount: Deactivated successfully.
Jan 22 09:27:07 np0005592157 podman[285022]: 2026-01-22 14:27:07.843490424 +0000 UTC m=+0.213387816 container remove 27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:27:07 np0005592157 systemd[1]: libpod-conmon-27a73228feb2e0832189d5175ed52ed22aec2d771e53932ea4c540af99534f8e.scope: Deactivated successfully.
Jan 22 09:27:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 09:27:08 np0005592157 podman[285062]: 2026-01-22 14:27:08.02952962 +0000 UTC m=+0.063917171 container create be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:27:08 np0005592157 systemd[1]: Started libpod-conmon-be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d.scope.
Jan 22 09:27:08 np0005592157 podman[285062]: 2026-01-22 14:27:07.994121179 +0000 UTC m=+0.028508820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:27:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:27:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207806bdbf9cd9ee200a14116d5fd90522d5fd6f0c17bce9e8b0663dccdfad9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207806bdbf9cd9ee200a14116d5fd90522d5fd6f0c17bce9e8b0663dccdfad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207806bdbf9cd9ee200a14116d5fd90522d5fd6f0c17bce9e8b0663dccdfad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207806bdbf9cd9ee200a14116d5fd90522d5fd6f0c17bce9e8b0663dccdfad9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207806bdbf9cd9ee200a14116d5fd90522d5fd6f0c17bce9e8b0663dccdfad9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:08 np0005592157 podman[285062]: 2026-01-22 14:27:08.129442284 +0000 UTC m=+0.163829865 container init be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:27:08 np0005592157 podman[285062]: 2026-01-22 14:27:08.140420967 +0000 UTC m=+0.174808498 container start be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:27:08 np0005592157 podman[285062]: 2026-01-22 14:27:08.144654302 +0000 UTC m=+0.179041883 container attach be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:27:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:08.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:08 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:08 np0005592157 heuristic_burnell[285079]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:27:08 np0005592157 heuristic_burnell[285079]: --> relative data size: 1.0
Jan 22 09:27:08 np0005592157 heuristic_burnell[285079]: --> All data devices are unavailable
Jan 22 09:27:08 np0005592157 systemd[1]: libpod-be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d.scope: Deactivated successfully.
Jan 22 09:27:08 np0005592157 podman[285062]: 2026-01-22 14:27:08.994984292 +0000 UTC m=+1.029371873 container died be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:27:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f207806bdbf9cd9ee200a14116d5fd90522d5fd6f0c17bce9e8b0663dccdfad9-merged.mount: Deactivated successfully.
Jan 22 09:27:09 np0005592157 podman[285062]: 2026-01-22 14:27:09.064690885 +0000 UTC m=+1.099078426 container remove be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:27:09 np0005592157 systemd[1]: libpod-conmon-be1ed720b6ef14680a175707552e982053c43acff50e321ba53926baa92f796d.scope: Deactivated successfully.
Jan 22 09:27:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:27:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:09.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:27:09 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:09 np0005592157 podman[285249]: 2026-01-22 14:27:09.685446749 +0000 UTC m=+0.066447673 container create a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:27:09 np0005592157 systemd[1]: Started libpod-conmon-a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db.scope.
Jan 22 09:27:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:27:09 np0005592157 podman[285249]: 2026-01-22 14:27:09.659435232 +0000 UTC m=+0.040436166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:27:09 np0005592157 podman[285249]: 2026-01-22 14:27:09.762364671 +0000 UTC m=+0.143365575 container init a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:27:09 np0005592157 podman[285249]: 2026-01-22 14:27:09.769053437 +0000 UTC m=+0.150054351 container start a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:27:09 np0005592157 bold_mendeleev[285266]: 167 167
Jan 22 09:27:09 np0005592157 systemd[1]: libpod-a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db.scope: Deactivated successfully.
Jan 22 09:27:09 np0005592157 podman[285249]: 2026-01-22 14:27:09.774662437 +0000 UTC m=+0.155663351 container attach a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:27:09 np0005592157 podman[285249]: 2026-01-22 14:27:09.775063857 +0000 UTC m=+0.156064751 container died a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 09:27:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a93192c0250c17068dfad9661a5b11cc9c016078f540cf659181bb86810ac70c-merged.mount: Deactivated successfully.
Jan 22 09:27:09 np0005592157 podman[285249]: 2026-01-22 14:27:09.81461416 +0000 UTC m=+0.195615054 container remove a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:27:09 np0005592157 systemd[1]: libpod-conmon-a747a42859d0e6e340b5c454de280083500b289752ab801912a321fc81f962db.scope: Deactivated successfully.
Jan 22 09:27:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 09:27:09 np0005592157 podman[285290]: 2026-01-22 14:27:09.977724975 +0000 UTC m=+0.043986204 container create 77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:27:10 np0005592157 systemd[1]: Started libpod-conmon-77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe.scope.
Jan 22 09:27:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:27:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e685c644100230a6ce4ad5fd31f3ce07ec82ba226399cb045631b4df09f453be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e685c644100230a6ce4ad5fd31f3ce07ec82ba226399cb045631b4df09f453be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e685c644100230a6ce4ad5fd31f3ce07ec82ba226399cb045631b4df09f453be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e685c644100230a6ce4ad5fd31f3ce07ec82ba226399cb045631b4df09f453be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:10 np0005592157 podman[285290]: 2026-01-22 14:27:09.956154519 +0000 UTC m=+0.022415758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:27:10 np0005592157 podman[285290]: 2026-01-22 14:27:10.052677219 +0000 UTC m=+0.118938438 container init 77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:27:10 np0005592157 podman[285290]: 2026-01-22 14:27:10.058698859 +0000 UTC m=+0.124960058 container start 77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:27:10 np0005592157 podman[285290]: 2026-01-22 14:27:10.063902198 +0000 UTC m=+0.130163407 container attach 77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.078 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "a8700e89-4334-472c-bf9a-9e203a561f43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.079 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "a8700e89-4334-472c-bf9a-9e203a561f43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.079 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "a8700e89-4334-472c-bf9a-9e203a561f43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.080 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "a8700e89-4334-472c-bf9a-9e203a561f43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.080 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "a8700e89-4334-472c-bf9a-9e203a561f43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.081 245711 INFO nova.compute.manager [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Terminating instance#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.082 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.082 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquired lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.083 245711 DEBUG nova.network.neutron [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.215 245711 DEBUG nova.network.neutron [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.239 245711 DEBUG oslo_concurrency.lockutils [None req-8b2a2a57-3ca2-4e7a-8c74-a2836c9c0892 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "df283133-db55-4a7e-a651-12dd25bae88e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:10 np0005592157 nova_compute[245707]: 2026-01-22 14:27:10.537 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:27:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:10.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:10 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]: {
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:    "0": [
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:        {
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "devices": [
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "/dev/loop3"
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            ],
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "lv_name": "ceph_lv0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "lv_size": "7511998464",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "name": "ceph_lv0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "tags": {
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.cluster_name": "ceph",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.crush_device_class": "",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.encrypted": "0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.osd_id": "0",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.type": "block",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:                "ceph.vdo": "0"
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            },
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "type": "block",
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:            "vg_name": "ceph_vg0"
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:        }
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]:    ]
Jan 22 09:27:10 np0005592157 friendly_lovelace[285306]: }
Jan 22 09:27:10 np0005592157 systemd[1]: libpod-77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe.scope: Deactivated successfully.
Jan 22 09:27:10 np0005592157 podman[285290]: 2026-01-22 14:27:10.840843264 +0000 UTC m=+0.907104563 container died 77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:27:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e685c644100230a6ce4ad5fd31f3ce07ec82ba226399cb045631b4df09f453be-merged.mount: Deactivated successfully.
Jan 22 09:27:10 np0005592157 podman[285290]: 2026-01-22 14:27:10.915847469 +0000 UTC m=+0.982108668 container remove 77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lovelace, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:27:10 np0005592157 systemd[1]: libpod-conmon-77e2d04f5943e197705304eec20717cddff1dc5cc0ce7d27c5fc08226ec989fe.scope: Deactivated successfully.
Jan 22 09:27:11 np0005592157 nova_compute[245707]: 2026-01-22 14:27:11.111 245711 DEBUG nova.network.neutron [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:27:11 np0005592157 nova_compute[245707]: 2026-01-22 14:27:11.129 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Releasing lock "refresh_cache-a8700e89-4334-472c-bf9a-9e203a561f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:27:11 np0005592157 nova_compute[245707]: 2026-01-22 14:27:11.130 245711 DEBUG nova.compute.manager [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 09:27:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:11.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:11 np0005592157 podman[285470]: 2026-01-22 14:27:11.559919812 +0000 UTC m=+0.041467372 container create 348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:27:11 np0005592157 systemd[1]: Started libpod-conmon-348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c.scope.
Jan 22 09:27:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:27:11 np0005592157 podman[285470]: 2026-01-22 14:27:11.542075919 +0000 UTC m=+0.023623569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:27:11 np0005592157 podman[285470]: 2026-01-22 14:27:11.638499566 +0000 UTC m=+0.120047146 container init 348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:27:11 np0005592157 podman[285470]: 2026-01-22 14:27:11.646185127 +0000 UTC m=+0.127732687 container start 348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 09:27:11 np0005592157 podman[285470]: 2026-01-22 14:27:11.649089369 +0000 UTC m=+0.130636949 container attach 348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:27:11 np0005592157 tender_pare[285486]: 167 167
Jan 22 09:27:11 np0005592157 systemd[1]: libpod-348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c.scope: Deactivated successfully.
Jan 22 09:27:11 np0005592157 podman[285470]: 2026-01-22 14:27:11.650765491 +0000 UTC m=+0.132313061 container died 348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:27:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3ede9c341154c2f2e006ff44b93e5b0e3aa3fe45147e52e61ff6cfb4b0614ce8-merged.mount: Deactivated successfully.
Jan 22 09:27:11 np0005592157 podman[285470]: 2026-01-22 14:27:11.690262303 +0000 UTC m=+0.171809873 container remove 348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:27:11 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:11 np0005592157 systemd[1]: libpod-conmon-348bf9734eaa04f73f4288606c8b5937797c5b9f1b693b083df11567fe2eaf8c.scope: Deactivated successfully.
Jan 22 09:27:11 np0005592157 podman[285509]: 2026-01-22 14:27:11.870879694 +0000 UTC m=+0.055851030 container create a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:27:11 np0005592157 systemd[1]: Started libpod-conmon-a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016.scope.
Jan 22 09:27:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 0 B/s wr, 155 op/s
Jan 22 09:27:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:27:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7147d833572a82372d4c67f06d3f567dd1fc98f3e2ac353c96b7a8e9618af44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7147d833572a82372d4c67f06d3f567dd1fc98f3e2ac353c96b7a8e9618af44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7147d833572a82372d4c67f06d3f567dd1fc98f3e2ac353c96b7a8e9618af44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7147d833572a82372d4c67f06d3f567dd1fc98f3e2ac353c96b7a8e9618af44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:27:11 np0005592157 podman[285509]: 2026-01-22 14:27:11.9483837 +0000 UTC m=+0.133355036 container init a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:27:11 np0005592157 podman[285509]: 2026-01-22 14:27:11.855109671 +0000 UTC m=+0.040081027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:27:11 np0005592157 podman[285509]: 2026-01-22 14:27:11.95600316 +0000 UTC m=+0.140974496 container start a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 09:27:11 np0005592157 podman[285509]: 2026-01-22 14:27:11.960515472 +0000 UTC m=+0.145486828 container attach a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:27:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:12.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]: {
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]:        "osd_id": 0,
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]:        "type": "bluestore"
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]:    }
Jan 22 09:27:12 np0005592157 exciting_bouman[285526]: }
Jan 22 09:27:12 np0005592157 systemd[1]: libpod-a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016.scope: Deactivated successfully.
Jan 22 09:27:12 np0005592157 podman[285509]: 2026-01-22 14:27:12.805000987 +0000 UTC m=+0.989972323 container died a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:27:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a7147d833572a82372d4c67f06d3f567dd1fc98f3e2ac353c96b7a8e9618af44-merged.mount: Deactivated successfully.
Jan 22 09:27:12 np0005592157 podman[285509]: 2026-01-22 14:27:12.856799075 +0000 UTC m=+1.041770411 container remove a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 09:27:12 np0005592157 systemd[1]: libpod-conmon-a3dd70428398abbc1f159773226ac44a2f2c60969bf37e7cd96d21ecaeecc016.scope: Deactivated successfully.
Jan 22 09:27:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:27:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:12 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:27:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 458550a8-4f90-4241-8d17-5698f6d444d9 does not exist
Jan 22 09:27:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fd437dc3-b798-4891-be03-802a6870ff44 does not exist
Jan 22 09:27:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f1fa67df-5b34-40b2-8196-874b3d809a85 does not exist
Jan 22 09:27:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:27:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:13.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:27:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 22 09:27:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:14.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:15 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:15.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:15 np0005592157 nova_compute[245707]: 2026-01-22 14:27:15.539 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:27:15 np0005592157 nova_compute[245707]: 2026-01-22 14:27:15.542 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:15 np0005592157 nova_compute[245707]: 2026-01-22 14:27:15.542 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:27:15 np0005592157 nova_compute[245707]: 2026-01-22 14:27:15.542 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:27:15 np0005592157 nova_compute[245707]: 2026-01-22 14:27:15.542 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:27:15 np0005592157 nova_compute[245707]: 2026-01-22 14:27:15.544 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:27:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 22 09:27:16 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:27:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:16.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:17.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:17 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:17 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:27:18 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:18.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:19.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:19 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:27:20 np0005592157 nova_compute[245707]: 2026-01-22 14:27:20.543 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:20.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:20 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:21 np0005592157 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 22 09:27:21 np0005592157 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000f.scope: Consumed 23.174s CPU time.
Jan 22 09:27:21 np0005592157 systemd-machined[211644]: Machine qemu-3-instance-0000000f terminated.
Jan 22 09:27:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:21 np0005592157 nova_compute[245707]: 2026-01-22 14:27:21.374 245711 INFO nova.virt.libvirt.driver [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance destroyed successfully.#033[00m
Jan 22 09:27:21 np0005592157 nova_compute[245707]: 2026-01-22 14:27:21.374 245711 DEBUG nova.objects.instance [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'resources' on Instance uuid a8700e89-4334-472c-bf9a-9e203a561f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:27:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:27:21 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:22.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:23 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:23 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:23.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:27:24 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:24 np0005592157 podman[285691]: 2026-01-22 14:27:24.333930658 +0000 UTC m=+0.065750496 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 09:27:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:24.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:25 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:25.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:25 np0005592157 nova_compute[245707]: 2026-01-22 14:27:25.546 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:26 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:27 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:27.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:28 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:28 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:28.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:29 np0005592157 podman[285714]: 2026-01-22 14:27:29.354990092 +0000 UTC m=+0.091308301 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:27:29 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:27:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:29.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:27:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:30 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:30 np0005592157 nova_compute[245707]: 2026-01-22 14:27:30.547 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:30 np0005592157 nova_compute[245707]: 2026-01-22 14:27:30.548 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:30.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:31.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:31 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:31 np0005592157 nova_compute[245707]: 2026-01-22 14:27:31.459 245711 WARNING nova.storage.rbd_utils [-] rbd remove a8700e89-4334-472c-bf9a-9e203a561f43_disk in pool vms failed: rbd.ImageBusy: [errno 16] RBD image is busy (error removing image)#033[00m
Jan 22 09:27:31 np0005592157 nova_compute[245707]: 2026-01-22 14:27:31.461 245711 WARNING oslo.service.loopingcall [-] Function 'nova.storage.rbd_utils.RBDDriver._destroy_volume.<locals>._cleanup_vol' run outlasted interval by 5.04 sec#033[00m
Jan 22 09:27:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:32 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:32.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:27:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:33.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:27:33 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:33 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:34.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:34 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:35.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:35 np0005592157 nova_compute[245707]: 2026-01-22 14:27:35.549 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:27:35 np0005592157 nova_compute[245707]: 2026-01-22 14:27:35.550 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:35 np0005592157 nova_compute[245707]: 2026-01-22 14:27:35.550 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:27:35 np0005592157 nova_compute[245707]: 2026-01-22 14:27:35.550 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:27:35 np0005592157 nova_compute[245707]: 2026-01-22 14:27:35.551 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:27:35 np0005592157 nova_compute[245707]: 2026-01-22 14:27:35.552 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:35 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 09:27:36 np0005592157 nova_compute[245707]: 2026-01-22 14:27:36.373 245711 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092041.3706384, a8700e89-4334-472c-bf9a-9e203a561f43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:27:36 np0005592157 nova_compute[245707]: 2026-01-22 14:27:36.373 245711 INFO nova.compute.manager [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:27:36 np0005592157 nova_compute[245707]: 2026-01-22 14:27:36.407 245711 DEBUG nova.compute.manager [None req-c5930d6d-f4b7-464b-b06b-a397b9d21dd8 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:27:36 np0005592157 nova_compute[245707]: 2026-01-22 14:27:36.411 245711 DEBUG nova.compute.manager [None req-c5930d6d-f4b7-464b-b06b-a397b9d21dd8 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:27:36 np0005592157 nova_compute[245707]: 2026-01-22 14:27:36.440 245711 INFO nova.compute.manager [None req-c5930d6d-f4b7-464b-b06b-a397b9d21dd8 - - - - - -] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 22 09:27:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:36.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:36 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:36 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:37.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:37 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:38.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:38 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:40 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:40 np0005592157 nova_compute[245707]: 2026-01-22 14:27:40.551 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:40.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:41 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:41.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:41 np0005592157 nova_compute[245707]: 2026-01-22 14:27:41.562 245711 WARNING nova.storage.rbd_utils [-] rbd remove a8700e89-4334-472c-bf9a-9e203a561f43_disk in pool vms failed: rbd.ImageBusy: [errno 16] RBD image is busy (error removing image)#033[00m
Jan 22 09:27:41 np0005592157 nova_compute[245707]: 2026-01-22 14:27:41.562 245711 WARNING oslo.service.loopingcall [-] Function 'nova.storage.rbd_utils.RBDDriver._destroy_volume.<locals>._cleanup_vol' run outlasted interval by 5.10 sec#033[00m
Jan 22 09:27:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:42 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:42.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:43.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:43 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:43 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:44.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:44 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:45 np0005592157 nova_compute[245707]: 2026-01-22 14:27:45.553 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:45 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 09:27:46 np0005592157 nova_compute[245707]: 2026-01-22 14:27:46.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:46 np0005592157 nova_compute[245707]: 2026-01-22 14:27:46.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:27:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:46 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:46 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:47.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:27:47
Jan 22 09:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', 'images', '.rgw.root', 'volumes']
Jan 22 09:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:27:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:27:47.600 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:27:47.600 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:27:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:27:47.600 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:27:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:48 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:48 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:49 np0005592157 nova_compute[245707]: 2026-01-22 14:27:49.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:49 np0005592157 nova_compute[245707]: 2026-01-22 14:27:49.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:49 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:49.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:49 np0005592157 nova_compute[245707]: 2026-01-22 14:27:49.679 245711 WARNING nova.storage.rbd_utils [-] rbd remove a8700e89-4334-472c-bf9a-9e203a561f43_disk in pool vms failed: rbd.ImageBusy: [errno 16] RBD image is busy (error removing image)#033[00m
Jan 22 09:27:49 np0005592157 nova_compute[245707]: 2026-01-22 14:27:49.680 245711 WARNING oslo.service.loopingcall [-] Function 'nova.storage.rbd_utils.RBDDriver._destroy_volume.<locals>._cleanup_vol' run outlasted interval by 3.12 sec#033[00m
Jan 22 09:27:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 09:27:50 np0005592157 nova_compute[245707]: 2026-01-22 14:27:50.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:50 np0005592157 nova_compute[245707]: 2026-01-22 14:27:50.555 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:50.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:51.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:51 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:27:51 np0005592157 nova_compute[245707]: 2026-01-22 14:27:51.927 245711 INFO nova.virt.libvirt.driver [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Deleting instance files /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43_del#033[00m
Jan 22 09:27:51 np0005592157 nova_compute[245707]: 2026-01-22 14:27:51.928 245711 INFO nova.virt.libvirt.driver [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Deletion of /var/lib/nova/instances/a8700e89-4334-472c-bf9a-9e203a561f43_del complete#033[00m
Jan 22 09:27:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 529 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 6.9 KiB/s rd, 341 B/s wr, 10 op/s
Jan 22 09:27:52 np0005592157 nova_compute[245707]: 2026-01-22 14:27:52.236 245711 INFO nova.compute.manager [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Took 41.11 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 09:27:52 np0005592157 nova_compute[245707]: 2026-01-22 14:27:52.237 245711 DEBUG oslo.service.loopingcall [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 09:27:52 np0005592157 nova_compute[245707]: 2026-01-22 14:27:52.237 245711 DEBUG nova.compute.manager [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 09:27:52 np0005592157 nova_compute[245707]: 2026-01-22 14:27:52.237 245711 DEBUG nova.network.neutron [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 09:27:52 np0005592157 nova_compute[245707]: 2026-01-22 14:27:52.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 3063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:52 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:52 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:53.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 529 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 6.8 KiB/s rd, 341 B/s wr, 9 op/s
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.001 245711 DEBUG nova.network.neutron [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.018 245711 DEBUG nova.network.neutron [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.034 245711 INFO nova.compute.manager [-] [instance: a8700e89-4334-472c-bf9a-9e203a561f43] Took 1.80 seconds to deallocate network for instance.#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.071 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.071 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.191 245711 DEBUG oslo_concurrency.processutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:27:54 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:54 np0005592157 ceph-mon[74359]: Health check update: 25 slow ops, oldest one blocked for 3063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:27:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715672213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.609 245711 DEBUG oslo_concurrency.processutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.616 245711 DEBUG nova.compute.provider_tree [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.636 245711 DEBUG nova.scheduler.client.report [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.655 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:27:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.676 245711 INFO nova.scheduler.client.report [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Deleted allocations for instance a8700e89-4334-472c-bf9a-9e203a561f43#033[00m
Jan 22 09:27:54 np0005592157 nova_compute[245707]: 2026-01-22 14:27:54.741 245711 DEBUG oslo_concurrency.lockutils [None req-e1aa9797-c10c-41a8-8233-2c25ca3c963e a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "a8700e89-4334-472c-bf9a-9e203a561f43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 44.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.271 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.271 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.271 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:27:55 np0005592157 podman[285826]: 2026-01-22 14:27:55.310447356 +0000 UTC m=+0.047962183 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:27:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:55.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:55 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:55 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.557 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.558 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.558 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.558 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.559 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:27:55 np0005592157 nova_compute[245707]: 2026-01-22 14:27:55.560 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 09:27:56 np0005592157 nova_compute[245707]: 2026-01-22 14:27:56.268 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:56.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:56 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:57 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 09:27:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:27:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:58.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:27:59 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:59 np0005592157 nova_compute[245707]: 2026-01-22 14:27:59.239 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:27:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.273 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.277 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.277 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.278 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.279 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:00 np0005592157 podman[285898]: 2026-01-22 14:28:00.397814649 +0000 UTC m=+0.130666209 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:28:00 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.560 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:28:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:00.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:28:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397044635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:28:00 np0005592157 nova_compute[245707]: 2026-01-22 14:28:00.793 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.154 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.155 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.285 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.286 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4685MB free_disk=20.77179718017578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.286 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.286 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.367 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.367 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.368 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.368 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.368 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.368 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.368 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:28:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:01.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.481 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 09:28:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:28:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668164319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.970 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.977 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:28:01 np0005592157 nova_compute[245707]: 2026-01-22 14:28:01.994 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:28:02 np0005592157 nova_compute[245707]: 2026-01-22 14:28:02.016 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:28:02 np0005592157 nova_compute[245707]: 2026-01-22 14:28:02.016 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:02.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:02 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:02 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:03.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 852 B/s wr, 19 op/s
Jan 22 09:28:04 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010314521100874744 of space, bias 1.0, pg target 3.0943563302624235 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:28:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:28:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:05 np0005592157 nova_compute[245707]: 2026-01-22 14:28:05.017 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:28:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:05.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:05 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:05 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:05 np0005592157 nova_compute[245707]: 2026-01-22 14:28:05.563 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 09:28:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 852 B/s wr, 19 op/s
Jan 22 09:28:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:06.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:07.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 22 09:28:08 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:08 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:08.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:09 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:09 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:09.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 683 KiB/s rd, 1 op/s
Jan 22 09:28:10 np0005592157 nova_compute[245707]: 2026-01-22 14:28:10.564 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:10 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:11.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:28:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:13 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:13 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:13.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:28:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 09:28:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:28:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:28:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:15.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:28:15 np0005592157 nova_compute[245707]: 2026-01-22 14:28:15.567 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:28:15 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 546 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 39 op/s
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:28:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:16.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b149e248-f2cf-42b3-9ba7-08625ecaf574 does not exist
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 99a4f9bc-a8bf-4e11-a37b-e3f666c29839 does not exist
Jan 22 09:28:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fb37891e-4eb9-4146-9645-c9fc049f3d58 does not exist
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:28:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:17.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:17 np0005592157 podman[286249]: 2026-01-22 14:28:17.354345774 +0000 UTC m=+0.022091320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:28:17 np0005592157 podman[286249]: 2026-01-22 14:28:17.652034225 +0000 UTC m=+0.319779751 container create 94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:28:17 np0005592157 systemd[1]: Started libpod-conmon-94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c.scope.
Jan 22 09:28:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:28:17 np0005592157 podman[286249]: 2026-01-22 14:28:17.811020458 +0000 UTC m=+0.478766064 container init 94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:28:17 np0005592157 podman[286249]: 2026-01-22 14:28:17.820017932 +0000 UTC m=+0.487763488 container start 94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 09:28:17 np0005592157 wizardly_haslett[286316]: 167 167
Jan 22 09:28:17 np0005592157 systemd[1]: libpod-94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c.scope: Deactivated successfully.
Jan 22 09:28:17 np0005592157 podman[286249]: 2026-01-22 14:28:17.83281014 +0000 UTC m=+0.500555666 container attach 94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_haslett, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 09:28:17 np0005592157 podman[286249]: 2026-01-22 14:28:17.833540468 +0000 UTC m=+0.501285994 container died 94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_haslett, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:28:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1b6d19f2073e8e5bf874ea76ee2b069ee3c8e6e450cfcd44d1fcde91ad6e5b1d-merged.mount: Deactivated successfully.
Jan 22 09:28:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 546 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 22 09:28:17 np0005592157 podman[286249]: 2026-01-22 14:28:17.959062359 +0000 UTC m=+0.626807885 container remove 94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_haslett, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:28:17 np0005592157 systemd[1]: libpod-conmon-94c9067af19c6c8f72bba59aa65997d947c17b28145c3c1f91eb3db8671d139c.scope: Deactivated successfully.
Jan 22 09:28:18 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:18 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:18 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:18 np0005592157 podman[286341]: 2026-01-22 14:28:18.105756816 +0000 UTC m=+0.027097524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:28:18 np0005592157 podman[286341]: 2026-01-22 14:28:18.315609333 +0000 UTC m=+0.236950051 container create 23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:28:18 np0005592157 systemd[1]: Started libpod-conmon-23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea.scope.
Jan 22 09:28:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:28:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82a9875cc4f38d438e8e5da713b29c71fa449557f9eb2d77af70852776996b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82a9875cc4f38d438e8e5da713b29c71fa449557f9eb2d77af70852776996b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82a9875cc4f38d438e8e5da713b29c71fa449557f9eb2d77af70852776996b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82a9875cc4f38d438e8e5da713b29c71fa449557f9eb2d77af70852776996b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82a9875cc4f38d438e8e5da713b29c71fa449557f9eb2d77af70852776996b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:18 np0005592157 podman[286341]: 2026-01-22 14:28:18.448484147 +0000 UTC m=+0.369824865 container init 23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:28:18 np0005592157 podman[286341]: 2026-01-22 14:28:18.462113326 +0000 UTC m=+0.383454004 container start 23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hermann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:28:18 np0005592157 podman[286341]: 2026-01-22 14:28:18.504629493 +0000 UTC m=+0.425970201 container attach 23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:28:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:18.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:19 np0005592157 mystifying_hermann[286357]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:28:19 np0005592157 mystifying_hermann[286357]: --> relative data size: 1.0
Jan 22 09:28:19 np0005592157 mystifying_hermann[286357]: --> All data devices are unavailable
Jan 22 09:28:19 np0005592157 systemd[1]: libpod-23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea.scope: Deactivated successfully.
Jan 22 09:28:19 np0005592157 podman[286341]: 2026-01-22 14:28:19.336135016 +0000 UTC m=+1.257475694 container died 23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:28:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:19.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f82a9875cc4f38d438e8e5da713b29c71fa449557f9eb2d77af70852776996b4-merged.mount: Deactivated successfully.
Jan 22 09:28:19 np0005592157 podman[286341]: 2026-01-22 14:28:19.559378746 +0000 UTC m=+1.480719424 container remove 23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hermann, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:28:19 np0005592157 systemd[1]: libpod-conmon-23ef3b4ed1226b2b5f6650c741617096ed24bf486074668c0de43ef30c0467ea.scope: Deactivated successfully.
Jan 22 09:28:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 22 09:28:19 np0005592157 nova_compute[245707]: 2026-01-22 14:28:19.997 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.000 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.024 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.092 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.092 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.100 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.100 245711 INFO nova.compute.claims [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 09:28:20 np0005592157 podman[286526]: 2026-01-22 14:28:20.136871123 +0000 UTC m=+0.025232477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.302 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:20 np0005592157 podman[286526]: 2026-01-22 14:28:20.505908318 +0000 UTC m=+0.394269552 container create 37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.568 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:28:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632739966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:28:20 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.881 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.889 245711 DEBUG nova.compute.provider_tree [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:28:20 np0005592157 nova_compute[245707]: 2026-01-22 14:28:20.915 245711 DEBUG nova.scheduler.client.report [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:28:20 np0005592157 systemd[1]: Started libpod-conmon-37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a.scope.
Jan 22 09:28:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.012 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.013 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.118 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.118 245711 DEBUG nova.network.neutron [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.136 245711 INFO nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.176 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:28:21 np0005592157 podman[286526]: 2026-01-22 14:28:21.235831475 +0000 UTC m=+1.124192719 container init 37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:28:21 np0005592157 podman[286526]: 2026-01-22 14:28:21.248437379 +0000 UTC m=+1.136798613 container start 37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:28:21 np0005592157 vigorous_galois[286564]: 167 167
Jan 22 09:28:21 np0005592157 systemd[1]: libpod-37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a.scope: Deactivated successfully.
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.264 245711 DEBUG nova.policy [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '32df6d966d7540dd851bf51a1148be65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b4b5b635cbf4888966d80692b78281f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.386 245711 INFO nova.virt.block_device [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Booting with volume 6e173a8e-fd98-4de4-a470-2c50f67a6d48 at /dev/vda#033[00m
Jan 22 09:28:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:21.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.589 245711 DEBUG os_brick.utils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 22 09:28:21 np0005592157 nova_compute[245707]: 2026-01-22 14:28:21.591 245711 INFO oslo.privsep.daemon [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpe8vy1ait/privsep.sock']#033[00m
Jan 22 09:28:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 09:28:22 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:22.088 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.088 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:22 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:22.089 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:28:22 np0005592157 podman[286526]: 2026-01-22 14:28:22.253816014 +0000 UTC m=+2.142177268 container attach 37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:28:22 np0005592157 podman[286526]: 2026-01-22 14:28:22.254302056 +0000 UTC m=+2.142663290 container died 37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:28:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e3f1f3921c1a3b5a833a74a9d953dd94de2bd5c8b42e8f340bc52d76ec44c50f-merged.mount: Deactivated successfully.
Jan 22 09:28:22 np0005592157 podman[286526]: 2026-01-22 14:28:22.371326016 +0000 UTC m=+2.259687250 container remove 37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:28:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:22 np0005592157 systemd[1]: libpod-conmon-37ba35c77c108abdd68aa7a7738f8f916ec575b88cc4d8ebe0897e7e4db51d8a.scope: Deactivated successfully.
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.524 245711 DEBUG nova.network.neutron [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Successfully created port: 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.533 245711 INFO oslo.privsep.daemon [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.401 286587 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.405 286587 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.406 286587 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.406 286587 INFO oslo.privsep.daemon [-] privsep daemon running as pid 286587#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.536 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[226b5088-00a4-423a-bd72-9def7c39ea92]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:22 np0005592157 podman[286593]: 2026-01-22 14:28:22.544517742 +0000 UTC m=+0.048028485 container create 2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:28:22 np0005592157 systemd[1]: Started libpod-conmon-2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41.scope.
Jan 22 09:28:22 np0005592157 podman[286593]: 2026-01-22 14:28:22.521640893 +0000 UTC m=+0.025151666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:28:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:28:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69658aa44ef67c9db9e9be619d82a2a0b79088cbf7d09d4f08b595bcfe740c13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69658aa44ef67c9db9e9be619d82a2a0b79088cbf7d09d4f08b595bcfe740c13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69658aa44ef67c9db9e9be619d82a2a0b79088cbf7d09d4f08b595bcfe740c13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69658aa44ef67c9db9e9be619d82a2a0b79088cbf7d09d4f08b595bcfe740c13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.628 286587 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.645 286587 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.645 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0dff73-d1fc-45c6-98ec-a934a264f1fb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:22 np0005592157 podman[286593]: 2026-01-22 14:28:22.646838496 +0000 UTC m=+0.150349269 container init 2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_stonebraker, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.647 286587 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.654 286587 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.654 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[51adea06-247d-4152-b3fb-91b7fb46527d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:efea51d9988', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.656 286587 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:22 np0005592157 podman[286593]: 2026-01-22 14:28:22.658318621 +0000 UTC m=+0.161829374 container start 2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_stonebraker, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:28:22 np0005592157 podman[286593]: 2026-01-22 14:28:22.661949531 +0000 UTC m=+0.165460314 container attach 2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_stonebraker, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.670 286587 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.670 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[2e816dfc-5919-429b-ac57-07c537bba255]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.673 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5760fe-233b-4843-aa51-6b3341f8a095]: (4, 'f2612c2e-5bb2-49d6-9db0-33d2b0e700a7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.673 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.697 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.700 245711 DEBUG os_brick.initiator.connectors.lightos [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.701 245711 DEBUG os_brick.initiator.connectors.lightos [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.701 245711 DEBUG os_brick.initiator.connectors.lightos [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.701 245711 DEBUG os_brick.utils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] <== get_connector_properties: return (1111ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:efea51d9988', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'f2612c2e-5bb2-49d6-9db0-33d2b0e700a7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 22 09:28:22 np0005592157 nova_compute[245707]: 2026-01-22 14:28:22.701 245711 DEBUG nova.virt.block_device [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating existing volume attachment record: d5a14597-bdb5-4f11-9e87-410238b00d48 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 22 09:28:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:23.092 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:23 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:23 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:23.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]: {
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:    "0": [
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:        {
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "devices": [
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "/dev/loop3"
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            ],
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "lv_name": "ceph_lv0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "lv_size": "7511998464",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "name": "ceph_lv0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "tags": {
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.cluster_name": "ceph",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.crush_device_class": "",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.encrypted": "0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.osd_id": "0",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.type": "block",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:                "ceph.vdo": "0"
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            },
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "type": "block",
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:            "vg_name": "ceph_vg0"
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:        }
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]:    ]
Jan 22 09:28:23 np0005592157 vigilant_stonebraker[286610]: }
Jan 22 09:28:23 np0005592157 systemd[1]: libpod-2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41.scope: Deactivated successfully.
Jan 22 09:28:23 np0005592157 podman[286593]: 2026-01-22 14:28:23.464129195 +0000 UTC m=+0.967639968 container died 2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 09:28:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-69658aa44ef67c9db9e9be619d82a2a0b79088cbf7d09d4f08b595bcfe740c13-merged.mount: Deactivated successfully.
Jan 22 09:28:23 np0005592157 podman[286593]: 2026-01-22 14:28:23.535027538 +0000 UTC m=+1.038538291 container remove 2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:28:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:28:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4267242123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:28:23 np0005592157 systemd[1]: libpod-conmon-2d4bfb431319e307f862f344d550ac8ce2de2933c2948b163aa60ad7b9496e41.scope: Deactivated successfully.
Jan 22 09:28:23 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.584 245711 DEBUG nova.network.neutron [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Successfully updated port: 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.629 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.630 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.630 245711 DEBUG nova.network.neutron [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.778 245711 DEBUG nova.compute.manager [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.779 245711 DEBUG nova.compute.manager [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing instance network info cache due to event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.779 245711 DEBUG oslo_concurrency.lockutils [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:23 np0005592157 nova_compute[245707]: 2026-01-22 14:28:23.874 245711 DEBUG nova.network.neutron [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:28:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 09:28:24 np0005592157 podman[286779]: 2026-01-22 14:28:24.100415994 +0000 UTC m=+0.035341780 container create d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:28:24 np0005592157 systemd[1]: Started libpod-conmon-d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360.scope.
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.140 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.142 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.142 245711 INFO nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating image(s)#033[00m
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.142 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.143 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Ensure instance console log exists: /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.143 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.143 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:24 np0005592157 nova_compute[245707]: 2026-01-22 14:28:24.144 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:28:24 np0005592157 podman[286779]: 2026-01-22 14:28:24.173589953 +0000 UTC m=+0.108515759 container init d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:28:24 np0005592157 podman[286779]: 2026-01-22 14:28:24.18070285 +0000 UTC m=+0.115628636 container start d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 09:28:24 np0005592157 podman[286779]: 2026-01-22 14:28:24.085539024 +0000 UTC m=+0.020464830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:28:24 np0005592157 podman[286779]: 2026-01-22 14:28:24.184042053 +0000 UTC m=+0.118967829 container attach d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:28:24 np0005592157 modest_cohen[286795]: 167 167
Jan 22 09:28:24 np0005592157 systemd[1]: libpod-d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360.scope: Deactivated successfully.
Jan 22 09:28:24 np0005592157 podman[286779]: 2026-01-22 14:28:24.186775201 +0000 UTC m=+0.121700987 container died d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:28:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0aeb5dae1eec05ed7e5031c1f6e90c2df9966781355a4c8a1ec54f7924926688-merged.mount: Deactivated successfully.
Jan 22 09:28:24 np0005592157 podman[286779]: 2026-01-22 14:28:24.230127159 +0000 UTC m=+0.165052985 container remove d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_cohen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:28:24 np0005592157 systemd[1]: libpod-conmon-d7ec4e1bf0cd9bc260f8874f023b9e0dd7888682ed01b09e15375b8da5814360.scope: Deactivated successfully.
Jan 22 09:28:24 np0005592157 podman[286819]: 2026-01-22 14:28:24.423817034 +0000 UTC m=+0.049776588 container create 2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_allen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:28:24 np0005592157 systemd[1]: Started libpod-conmon-2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6.scope.
Jan 22 09:28:24 np0005592157 podman[286819]: 2026-01-22 14:28:24.396585847 +0000 UTC m=+0.022545471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:28:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:28:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c057dc25bd33c4dd7e19fff4231389817aa0868615321acc3801ade2702219/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c057dc25bd33c4dd7e19fff4231389817aa0868615321acc3801ade2702219/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c057dc25bd33c4dd7e19fff4231389817aa0868615321acc3801ade2702219/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c057dc25bd33c4dd7e19fff4231389817aa0868615321acc3801ade2702219/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:24 np0005592157 podman[286819]: 2026-01-22 14:28:24.513300429 +0000 UTC m=+0.139260033 container init 2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:28:24 np0005592157 podman[286819]: 2026-01-22 14:28:24.518803626 +0000 UTC m=+0.144763170 container start 2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:28:24 np0005592157 podman[286819]: 2026-01-22 14:28:24.522828766 +0000 UTC m=+0.148788320 container attach 2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:28:24 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:25 np0005592157 jolly_allen[286835]: {
Jan 22 09:28:25 np0005592157 jolly_allen[286835]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:28:25 np0005592157 jolly_allen[286835]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:28:25 np0005592157 jolly_allen[286835]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:28:25 np0005592157 jolly_allen[286835]:        "osd_id": 0,
Jan 22 09:28:25 np0005592157 jolly_allen[286835]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:28:25 np0005592157 jolly_allen[286835]:        "type": "bluestore"
Jan 22 09:28:25 np0005592157 jolly_allen[286835]:    }
Jan 22 09:28:25 np0005592157 jolly_allen[286835]: }
Jan 22 09:28:25 np0005592157 systemd[1]: libpod-2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6.scope: Deactivated successfully.
Jan 22 09:28:25 np0005592157 podman[286857]: 2026-01-22 14:28:25.407532642 +0000 UTC m=+0.024032739 container died 2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:28:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-05c057dc25bd33c4dd7e19fff4231389817aa0868615321acc3801ade2702219-merged.mount: Deactivated successfully.
Jan 22 09:28:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:25.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:25 np0005592157 podman[286856]: 2026-01-22 14:28:25.459620277 +0000 UTC m=+0.064859184 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 09:28:25 np0005592157 podman[286857]: 2026-01-22 14:28:25.468098488 +0000 UTC m=+0.084598555 container remove 2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:28:25 np0005592157 systemd[1]: libpod-conmon-2cfc9004590dfce42f54ebd5fd1d765f5e4d7510ee1b4bbafce9f61d1ce39ee6.scope: Deactivated successfully.
Jan 22 09:28:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:28:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:28:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2c937761-35a1-49ee-ad0d-86ae257a5778 does not exist
Jan 22 09:28:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ccc296f4-359e-4286-b9ca-c78c6cbe3e27 does not exist
Jan 22 09:28:25 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fcf8061b-1371-44c2-abb4-13032971f99d does not exist
Jan 22 09:28:25 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:25 np0005592157 nova_compute[245707]: 2026-01-22 14:28:25.570 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 09:28:26 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.801 245711 DEBUG nova.network.neutron [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.934 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.934 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance network_info: |[{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.935 245711 DEBUG oslo_concurrency.lockutils [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.935 245711 DEBUG nova.network.neutron [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.939 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Start _get_guest_xml network_info=[{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'd5a14597-bdb5-4f11-9e87-410238b00d48', 'delete_on_termination': True, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6e173a8e-fd98-4de4-a470-2c50f67a6d48', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'attached_at': '', 'detached_at': '', 'volume_id': '6e173a8e-fd98-4de4-a470-2c50f67a6d48', 'serial': '6e173a8e-fd98-4de4-a470-2c50f67a6d48'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.944 245711 WARNING nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.951 245711 DEBUG nova.virt.libvirt.host [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.951 245711 DEBUG nova.virt.libvirt.host [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.955 245711 DEBUG nova.virt.libvirt.host [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.955 245711 DEBUG nova.virt.libvirt.host [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.956 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.956 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.957 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.957 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.957 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.957 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.958 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.958 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.958 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.958 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.958 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.958 245711 DEBUG nova.virt.hardware [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.988 245711 DEBUG nova.storage.rbd_utils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] rbd image 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:28:26 np0005592157 nova_compute[245707]: 2026-01-22 14:28:26.992 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:28:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420612954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:28:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.448 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.449 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.450 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.451 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:27 np0005592157 systemd[1]: Starting libvirt secret daemon...
Jan 22 09:28:27 np0005592157 systemd[1]: Started libvirt secret daemon.
Jan 22 09:28:27 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.648 245711 DEBUG nova.virt.libvirt.vif [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:28:21Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.650 245711 DEBUG nova.network.os_vif_util [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.651 245711 DEBUG nova.network.os_vif_util [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.652 245711 DEBUG nova.objects.instance [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.681 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <uuid>5e2e07b8-ca9c-4abc-81b0-66964eb87fa4</uuid>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <name>instance-00000012</name>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <memory>131072</memory>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <vcpu>1</vcpu>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <metadata>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <nova:name>tempest-LiveMigrationTest-server-1735692043</nova:name>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <nova:creationTime>2026-01-22 14:28:26</nova:creationTime>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <nova:flavor name="m1.nano">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:memory>128</nova:memory>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:disk>1</nova:disk>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:swap>0</nova:swap>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      </nova:flavor>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <nova:owner>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:user uuid="32df6d966d7540dd851bf51a1148be65">tempest-LiveMigrationTest-1708062570-project-member</nova:user>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:project uuid="6b4b5b635cbf4888966d80692b78281f">tempest-LiveMigrationTest-1708062570</nova:project>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      </nova:owner>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <nova:ports>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <nova:port uuid="2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        </nova:port>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      </nova:ports>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </nova:instance>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  </metadata>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <sysinfo type="smbios">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <system>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <entry name="serial">5e2e07b8-ca9c-4abc-81b0-66964eb87fa4</entry>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <entry name="uuid">5e2e07b8-ca9c-4abc-81b0-66964eb87fa4</entry>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </system>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  </sysinfo>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <os>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <boot dev="hd"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <smbios mode="sysinfo"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  </os>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <features>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <acpi/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <apic/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <vmcoreinfo/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  </features>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <clock offset="utc">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <timer name="hpet" present="no"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  </clock>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <cpu mode="custom" match="exact">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <model>Nehalem</model>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  </cpu>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  <devices>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <disk type="network" device="cdrom">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <driver type="raw" cache="none"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="vms/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_disk.config">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <target dev="sda" bus="sata"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <disk type="network" device="disk">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <source protocol="rbd" name="volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      </source>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <auth username="openstack">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      </auth>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <target dev="vda" bus="virtio"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <serial>6e173a8e-fd98-4de4-a470-2c50f67a6d48</serial>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </disk>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <interface type="ethernet">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <mac address="fa:16:3e:f9:af:b6"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <model type="virtio"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <mtu size="1442"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <target dev="tap2b1b16d5-1e"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </interface>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <serial type="pty">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <log file="/var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/console.log" append="off"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </serial>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <video>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <model type="virtio"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </video>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <input type="tablet" bus="usb"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <rng model="virtio">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </rng>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <controller type="usb" index="0"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    <memballoon model="virtio">
Jan 22 09:28:27 np0005592157 nova_compute[245707]:      <stats period="10"/>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:    </memballoon>
Jan 22 09:28:27 np0005592157 nova_compute[245707]:  </devices>
Jan 22 09:28:27 np0005592157 nova_compute[245707]: </domain>
Jan 22 09:28:27 np0005592157 nova_compute[245707]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.682 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Preparing to wait for external event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.682 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.682 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.682 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.683 245711 DEBUG nova.virt.libvirt.vif [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:28:21Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.683 245711 DEBUG nova.network.os_vif_util [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.684 245711 DEBUG nova.network.os_vif_util [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.684 245711 DEBUG os_vif [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.685 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.685 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.686 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.691 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.691 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b1b16d5-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.692 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b1b16d5-1e, col_values=(('external_ids', {'iface-id': '2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:af:b6', 'vm-uuid': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.693 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.695 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:28:27 np0005592157 NetworkManager[48997]: <info>  [1769092107.6954] manager: (tap2b1b16d5-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.703 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.704 245711 INFO os_vif [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.867 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.867 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.868 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] No VIF found with MAC fa:16:3e:f9:af:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.868 245711 INFO nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Using config drive#033[00m
Jan 22 09:28:27 np0005592157 nova_compute[245707]: 2026-01-22 14:28:27.896 245711 DEBUG nova.storage.rbd_utils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] rbd image 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:28:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 586 KiB/s wr, 2 op/s
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.558 245711 INFO nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating config drive at /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/disk.config#033[00m
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.564 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwkft7ctn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:28 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.696 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwkft7ctn" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.730 245711 DEBUG nova.storage.rbd_utils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] rbd image 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.738 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/disk.config 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.902 245711 DEBUG oslo_concurrency.processutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/disk.config 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.904 245711 INFO nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deleting local config drive /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/disk.config because it was imported into RBD.#033[00m
Jan 22 09:28:28 np0005592157 NetworkManager[48997]: <info>  [1769092108.9543] manager: (tap2b1b16d5-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 22 09:28:28 np0005592157 kernel: tap2b1b16d5-1e: entered promiscuous mode
Jan 22 09:28:28 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:28Z|00038|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this chassis.
Jan 22 09:28:28 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:28Z|00039|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.965 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.972 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:28 np0005592157 systemd-machined[211644]: New machine qemu-4-instance-00000012.
Jan 22 09:28:28 np0005592157 systemd-udevd[287073]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:28:28 np0005592157 NetworkManager[48997]: <info>  [1769092108.9944] device (tap2b1b16d5-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 09:28:28 np0005592157 NetworkManager[48997]: <info>  [1769092108.9952] device (tap2b1b16d5-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.992 245711 DEBUG nova.network.neutron [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updated VIF entry in instance network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:28:28 np0005592157 nova_compute[245707]: 2026-01-22 14:28:28.993 245711 DEBUG nova.network.neutron [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:28:29 np0005592157 systemd[1]: Started Virtual Machine qemu-4-instance-00000012.
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.013 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.015 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 bound to our chassis#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.017 157426 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b247a422-e88b-4d6e-9b42-d4947ce89ea4#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.023 245711 DEBUG oslo_concurrency.lockutils [req-304a5278-cfc3-465c-867f-c265340b00cf req-ac2472f7-b7a9-42f6-b762-288df7c4dcbf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:29 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:29Z|00040|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b ovn-installed in OVS
Jan 22 09:28:29 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:29Z|00041|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b up in Southbound
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.035 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.037 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d85723-12c2-49bb-8899-c19c7fe82f3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.039 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb247a422-e1 in ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.042 264865 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb247a422-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.043 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[53435fbc-b6f9-41aa-b3f6-1bd1d0d19816]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.044 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[1443aa56-a31a-42c3-8317-70547fea918b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.065 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8d7a64-89eb-4ca6-91bb-0cee5fd082cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.082 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[e34fba31-651d-40d9-838a-5627bb6b8cf7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.114 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[048fb7c6-a447-4617-b369-0238532c4cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 NetworkManager[48997]: <info>  [1769092109.1212] manager: (tapb247a422-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.121 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc5c596-9caa-4505-b5d0-d96c87411cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.156 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[82f489aa-060a-46c6-abb4-c96c7d71694e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.162 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a344e7-048c-46b1-acd1-3d78ddc2e88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 NetworkManager[48997]: <info>  [1769092109.1809] device (tapb247a422-e0): carrier: link connected
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.184 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[ad493731-5259-400d-84e1-7d81b9c77749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.199 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[efd5a5f5-5b9e-414f-807e-a5e6ffaa2926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590412, 'reachable_time': 36119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287107, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.213 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[8497cae4-e21f-456d-97ca-d80fa09faee0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:2b35'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 590412, 'tstamp': 590412}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287108, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.228 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[8087d0bd-c8d7-475b-a285-dd41ce28a79a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590412, 'reachable_time': 36119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287109, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.260 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[aceb0a60-ba9c-41cb-a70b-ecfe72077d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.308 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[8767641f-02c9-4bab-b890-2fb3274abed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.310 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.310 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.311 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb247a422-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.312 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:29 np0005592157 kernel: tapb247a422-e0: entered promiscuous mode
Jan 22 09:28:29 np0005592157 NetworkManager[48997]: <info>  [1769092109.3145] manager: (tapb247a422-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.315 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb247a422-e0, col_values=(('external_ids', {'iface-id': '9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.316 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:29 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:29Z|00042|binding|INFO|Releasing lport 9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a from this chassis (sb_readonly=0)
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.318 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.318 157426 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.319 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[84a0a1c3-3a0e-4e82-846d-ea8fec6a29ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.320 157426 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: global
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    log         /dev/log local0 debug
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    log-tag     haproxy-metadata-proxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    user        root
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    group       root
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    maxconn     1024
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    pidfile     /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    daemon
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: defaults
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    log global
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    mode http
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    option httplog
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    option dontlognull
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    option http-server-close
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    option forwardfor
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    retries                 3
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    timeout http-request    30s
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    timeout connect         30s
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    timeout client          32s
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    timeout server          32s
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    timeout http-keep-alive 30s
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: listen listener
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    bind 169.254.169.254:80
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]:    http-request add-header X-OVN-Network-ID b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 09:28:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:29.321 157426 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'env', 'PROCESS_TAG=haproxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b247a422-e88b-4d6e-9b42-d4947ce89ea4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.329 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.341 245711 DEBUG nova.compute.manager [req-9614c8c9-eea4-4166-b614-168f3e10d7c3 req-79aae875-3d36-4ada-ad5d-07aab31b37e0 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.342 245711 DEBUG oslo_concurrency.lockutils [req-9614c8c9-eea4-4166-b614-168f3e10d7c3 req-79aae875-3d36-4ada-ad5d-07aab31b37e0 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.342 245711 DEBUG oslo_concurrency.lockutils [req-9614c8c9-eea4-4166-b614-168f3e10d7c3 req-79aae875-3d36-4ada-ad5d-07aab31b37e0 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.343 245711 DEBUG oslo_concurrency.lockutils [req-9614c8c9-eea4-4166-b614-168f3e10d7c3 req-79aae875-3d36-4ada-ad5d-07aab31b37e0 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.343 245711 DEBUG nova.compute.manager [req-9614c8c9-eea4-4166-b614-168f3e10d7c3 req-79aae875-3d36-4ada-ad5d-07aab31b37e0 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Processing event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 22 09:28:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:29.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.550 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769092109.5494766, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.550 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Started (Lifecycle Event)#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.552 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.555 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.559 245711 INFO nova.virt.libvirt.driver [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance spawned successfully.#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.559 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.641 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.644 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.675 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.675 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769092109.549582, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.676 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Paused (Lifecycle Event)#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.680 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.680 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.681 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.681 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.682 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.682 245711 DEBUG nova.virt.libvirt.driver [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:28:29 np0005592157 podman[287184]: 2026-01-22 14:28:29.694233147 +0000 UTC m=+0.060556016 container create 88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:28:29 np0005592157 systemd[1]: Started libpod-conmon-88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935.scope.
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.749 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.756 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769092109.5554197, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.757 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:28:29 np0005592157 podman[287184]: 2026-01-22 14:28:29.665189445 +0000 UTC m=+0.031512344 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 09:28:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:28:29 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d616e8c228ea38fa4660e5cd1ba23a9e3c6a2ca94d66b1a2cadb61514fbe6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:29 np0005592157 podman[287184]: 2026-01-22 14:28:29.798615173 +0000 UTC m=+0.164938062 container init 88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.800 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.803 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:28:29 np0005592157 podman[287184]: 2026-01-22 14:28:29.803487004 +0000 UTC m=+0.169809873 container start 88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 09:28:29 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [NOTICE]   (287204) : New worker (287206) forked
Jan 22 09:28:29 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [NOTICE]   (287204) : Loading success.
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.832 245711 INFO nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 5.69 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.832 245711 DEBUG nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:29 np0005592157 nova_compute[245707]: 2026-01-22 14:28:29.839 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:28:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 587 KiB/s wr, 6 op/s
Jan 22 09:28:30 np0005592157 nova_compute[245707]: 2026-01-22 14:28:30.046 245711 INFO nova.compute.manager [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 9.98 seconds to build instance.#033[00m
Jan 22 09:28:30 np0005592157 nova_compute[245707]: 2026-01-22 14:28:30.071 245711 DEBUG oslo_concurrency.lockutils [None req-25a772d9-a51b-4874-8798-085acd4e5022 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:30 np0005592157 nova_compute[245707]: 2026-01-22 14:28:30.572 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:30.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:30 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:31 np0005592157 podman[287215]: 2026-01-22 14:28:31.357875738 +0000 UTC m=+0.089896856 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 09:28:31 np0005592157 nova_compute[245707]: 2026-01-22 14:28:31.440 245711 DEBUG nova.compute.manager [req-900a779e-28d9-4377-8c72-787f7c46cc6c req-7af203bd-01a0-4256-bf0e-2c824ecc0092 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:28:31 np0005592157 nova_compute[245707]: 2026-01-22 14:28:31.440 245711 DEBUG oslo_concurrency.lockutils [req-900a779e-28d9-4377-8c72-787f7c46cc6c req-7af203bd-01a0-4256-bf0e-2c824ecc0092 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:31 np0005592157 nova_compute[245707]: 2026-01-22 14:28:31.440 245711 DEBUG oslo_concurrency.lockutils [req-900a779e-28d9-4377-8c72-787f7c46cc6c req-7af203bd-01a0-4256-bf0e-2c824ecc0092 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:31 np0005592157 nova_compute[245707]: 2026-01-22 14:28:31.440 245711 DEBUG oslo_concurrency.lockutils [req-900a779e-28d9-4377-8c72-787f7c46cc6c req-7af203bd-01a0-4256-bf0e-2c824ecc0092 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:31 np0005592157 nova_compute[245707]: 2026-01-22 14:28:31.440 245711 DEBUG nova.compute.manager [req-900a779e-28d9-4377-8c72-787f7c46cc6c req-7af203bd-01a0-4256-bf0e-2c824ecc0092 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:28:31 np0005592157 nova_compute[245707]: 2026-01-22 14:28:31.441 245711 WARNING nova.compute.manager [req-900a779e-28d9-4377-8c72-787f7c46cc6c req-7af203bd-01a0-4256-bf0e-2c824ecc0092 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state None.#033[00m
Jan 22 09:28:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:31.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:31 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 671 KiB/s rd, 13 KiB/s wr, 31 op/s
Jan 22 09:28:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:32 np0005592157 nova_compute[245707]: 2026-01-22 14:28:32.695 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:32.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:32 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:32 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:33.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:33 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 670 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 22 09:28:34 np0005592157 nova_compute[245707]: 2026-01-22 14:28:34.208 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Check if temp file /var/lib/nova/instances/tmpbphf1dve exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 22 09:28:34 np0005592157 nova_compute[245707]: 2026-01-22 14:28:34.208 245711 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 22 09:28:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:35.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:35 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:35 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:35 np0005592157 nova_compute[245707]: 2026-01-22 14:28:35.600 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:35 np0005592157 nova_compute[245707]: 2026-01-22 14:28:35.921 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:35 np0005592157 nova_compute[245707]: 2026-01-22 14:28:35.922 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:35 np0005592157 nova_compute[245707]: 2026-01-22 14:28:35.930 245711 INFO nova.compute.rpcapi [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Jan 22 09:28:35 np0005592157 nova_compute[245707]: 2026-01-22 14:28:35.931 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 09:28:36 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:28:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:36.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:28:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:37.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:37 np0005592157 nova_compute[245707]: 2026-01-22 14:28:37.696 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:37 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:37 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 09:28:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:38.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:39 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:39.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 09:28:40 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:40 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:40 np0005592157 nova_compute[245707]: 2026-01-22 14:28:40.523 245711 DEBUG nova.compute.manager [req-63af5096-ff70-48e0-be30-cbb24352814d req-8a5197c6-54a2-4cbd-b35e-2aa25c764bcb 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 09:28:40 np0005592157 nova_compute[245707]: 2026-01-22 14:28:40.524 245711 DEBUG oslo_concurrency.lockutils [req-63af5096-ff70-48e0-be30-cbb24352814d req-8a5197c6-54a2-4cbd-b35e-2aa25c764bcb 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:40 np0005592157 nova_compute[245707]: 2026-01-22 14:28:40.524 245711 DEBUG oslo_concurrency.lockutils [req-63af5096-ff70-48e0-be30-cbb24352814d req-8a5197c6-54a2-4cbd-b35e-2aa25c764bcb 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:40 np0005592157 nova_compute[245707]: 2026-01-22 14:28:40.525 245711 DEBUG oslo_concurrency.lockutils [req-63af5096-ff70-48e0-be30-cbb24352814d req-8a5197c6-54a2-4cbd-b35e-2aa25c764bcb 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:40 np0005592157 nova_compute[245707]: 2026-01-22 14:28:40.525 245711 DEBUG nova.compute.manager [req-63af5096-ff70-48e0-be30-cbb24352814d req-8a5197c6-54a2-4cbd-b35e-2aa25c764bcb 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 09:28:40 np0005592157 nova_compute[245707]: 2026-01-22 14:28:40.525 245711 DEBUG nova.compute.manager [req-63af5096-ff70-48e0-be30-cbb24352814d req-8a5197c6-54a2-4cbd-b35e-2aa25c764bcb 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 22 09:28:40 np0005592157 nova_compute[245707]: 2026-01-22 14:28:40.603 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:40.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:41.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:41 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.847 245711 INFO nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 5.93 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.848 245711 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.867 245711 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(94620205-fa24-46e6-99ca-3c525c4b9cfe),old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='d5a14597-bdb5-4f11-9e87-410238b00d48'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.871 245711 DEBUG nova.objects.instance [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.872 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.875 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.876 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.894 245711 DEBUG nova.virt.libvirt.migration [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Find same serial number: pos=1, serial=6e173a8e-fd98-4de4-a470-2c50f67a6d48 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.896 245711 DEBUG nova.virt.libvirt.vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:29Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.896 245711 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.896 245711 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.897 245711 DEBUG nova.virt.libvirt.migration [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating guest XML with vif config: <interface type="ethernet">
Jan 22 09:28:41 np0005592157 nova_compute[245707]:  <mac address="fa:16:3e:f9:af:b6"/>
Jan 22 09:28:41 np0005592157 nova_compute[245707]:  <model type="virtio"/>
Jan 22 09:28:41 np0005592157 nova_compute[245707]:  <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:28:41 np0005592157 nova_compute[245707]:  <mtu size="1442"/>
Jan 22 09:28:41 np0005592157 nova_compute[245707]:  <target dev="tap2b1b16d5-1e"/>
Jan 22 09:28:41 np0005592157 nova_compute[245707]: </interface>
Jan 22 09:28:41 np0005592157 nova_compute[245707]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 22 09:28:41 np0005592157 nova_compute[245707]: 2026-01-22 14:28:41.897 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 22 09:28:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.379 245711 DEBUG nova.virt.libvirt.migration [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.380 245711 INFO nova.virt.libvirt.migration [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 22 09:28:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.461 245711 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 22 09:28:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:42 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.657 245711 DEBUG nova.compute.manager [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.658 245711 DEBUG oslo_concurrency.lockutils [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.659 245711 DEBUG oslo_concurrency.lockutils [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.659 245711 DEBUG oslo_concurrency.lockutils [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.660 245711 DEBUG nova.compute.manager [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.660 245711 WARNING nova.compute.manager [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.660 245711 DEBUG nova.compute.manager [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.661 245711 DEBUG nova.compute.manager [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing instance network info cache due to event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.661 245711 DEBUG oslo_concurrency.lockutils [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.661 245711 DEBUG oslo_concurrency.lockutils [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.662 245711 DEBUG nova.network.neutron [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.700 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:42.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.966 245711 DEBUG nova.virt.libvirt.migration [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 22 09:28:42 np0005592157 nova_compute[245707]: 2026-01-22 14:28:42.967 245711 DEBUG nova.virt.libvirt.migration [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 22 09:28:42 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:42Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:28:42 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:42Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.063 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769092123.0628462, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.063 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Paused (Lifecycle Event)
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.087 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.091 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.118 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 22 09:28:43 np0005592157 kernel: tap2b1b16d5-1e (unregistering): left promiscuous mode
Jan 22 09:28:43 np0005592157 NetworkManager[48997]: <info>  [1769092123.2699] device (tap2b1b16d5-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 09:28:43 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:43Z|00043|binding|INFO|Releasing lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b from this chassis (sb_readonly=0)
Jan 22 09:28:43 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:43Z|00044|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b down in Southbound
Jan 22 09:28:43 np0005592157 ovn_controller[146940]: 2026-01-22T14:28:43Z|00045|binding|INFO|Removing iface tap2b1b16d5-1e ovn-installed in OVS
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.280 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.282 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.287 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c4fa18b6-ed0f-47ac-8eec-d1399749aa8e'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.290 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 unbound from our chassis
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.292 157426 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b247a422-e88b-4d6e-9b42-d4947ce89ea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.293 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[6db1a335-0d08-4b01-ada0-142262cdcbc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.294 157426 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace which is not needed anymore
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.311 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:43 np0005592157 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 22 09:28:43 np0005592157 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000012.scope: Consumed 13.176s CPU time.
Jan 22 09:28:43 np0005592157 systemd-machined[211644]: Machine qemu-4-instance-00000012 terminated.
Jan 22 09:28:43 np0005592157 virtqemud[245202]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48: No such file or directory
Jan 22 09:28:43 np0005592157 virtqemud[245202]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48: No such file or directory
Jan 22 09:28:43 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [NOTICE]   (287204) : haproxy version is 2.8.14-c23fe91
Jan 22 09:28:43 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [NOTICE]   (287204) : path to executable is /usr/sbin/haproxy
Jan 22 09:28:43 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [WARNING]  (287204) : Exiting Master process...
Jan 22 09:28:43 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [WARNING]  (287204) : Exiting Master process...
Jan 22 09:28:43 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [ALERT]    (287204) : Current worker (287206) exited with code 143 (Terminated)
Jan 22 09:28:43 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287200]: [WARNING]  (287204) : All workers exited. Exiting... (0)
Jan 22 09:28:43 np0005592157 systemd[1]: libpod-88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935.scope: Deactivated successfully.
Jan 22 09:28:43 np0005592157 podman[287327]: 2026-01-22 14:28:43.437623336 +0000 UTC m=+0.043862902 container died 88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.445 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.445 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.445 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 22 09:28:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:43.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935-userdata-shm.mount: Deactivated successfully.
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.469 245711 DEBUG nova.virt.libvirt.guest [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4' (instance-00000012) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 22 09:28:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e2d616e8c228ea38fa4660e5cd1ba23a9e3c6a2ca94d66b1a2cadb61514fbe6b-merged.mount: Deactivated successfully.
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.470 245711 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation has completed#033[00m
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.470 245711 INFO nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] _post_live_migration() is started..#033[00m
Jan 22 09:28:43 np0005592157 podman[287327]: 2026-01-22 14:28:43.479399655 +0000 UTC m=+0.085639231 container cleanup 88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:28:43 np0005592157 systemd[1]: libpod-conmon-88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935.scope: Deactivated successfully.
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.510 245711 DEBUG nova.compute.manager [req-e9602125-47c5-4b37-881f-e4ff97bc3fac req-7f04629c-ec54-42eb-bf36-8c2723bc2987 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.510 245711 DEBUG oslo_concurrency.lockutils [req-e9602125-47c5-4b37-881f-e4ff97bc3fac req-7f04629c-ec54-42eb-bf36-8c2723bc2987 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.510 245711 DEBUG oslo_concurrency.lockutils [req-e9602125-47c5-4b37-881f-e4ff97bc3fac req-7f04629c-ec54-42eb-bf36-8c2723bc2987 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.511 245711 DEBUG oslo_concurrency.lockutils [req-e9602125-47c5-4b37-881f-e4ff97bc3fac req-7f04629c-ec54-42eb-bf36-8c2723bc2987 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.511 245711 DEBUG nova.compute.manager [req-e9602125-47c5-4b37-881f-e4ff97bc3fac req-7f04629c-ec54-42eb-bf36-8c2723bc2987 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.511 245711 DEBUG nova.compute.manager [req-e9602125-47c5-4b37-881f-e4ff97bc3fac req-7f04629c-ec54-42eb-bf36-8c2723bc2987 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:28:43 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:43 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:43 np0005592157 podman[287368]: 2026-01-22 14:28:43.708134492 +0000 UTC m=+0.203723297 container remove 88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.714 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[7ffbbb29-9676-47a5-8285-37584d74fdb1]: (4, ('Thu Jan 22 02:28:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935)\n88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935\nThu Jan 22 02:28:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935)\n88785035cef3cab5fd656b729c568e2a0e95508ce60b94ae23174e55ac926935\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.716 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bb1ebd-1177-42ce-b0bd-af14cd6f1c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.717 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.780 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:43 np0005592157 kernel: tapb247a422-e0: left promiscuous mode
Jan 22 09:28:43 np0005592157 nova_compute[245707]: 2026-01-22 14:28:43.798 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.800 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d075ee-1d26-4282-aaf3-ca6cb69a01f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.815 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf4422c-31f7-4804-b83a-b6e52c7f3b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.816 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac66fad-ce64-46dc-a262-28d6af8188da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.832 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[bd82b935-4fd2-450d-b214-9b4c4a40c86b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590405, 'reachable_time': 22957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287387, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:43 np0005592157 systemd[1]: run-netns-ovnmeta\x2db247a422\x2de88b\x2d4d6e\x2d9b42\x2dd4947ce89ea4.mount: Deactivated successfully.
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.836 157842 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 09:28:43 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:43.836 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[086892bc-a645-43c5-bbdd-3f125917305b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 44 op/s
Jan 22 09:28:44 np0005592157 nova_compute[245707]: 2026-01-22 14:28:44.642 245711 DEBUG nova.network.neutron [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updated VIF entry in instance network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:28:44 np0005592157 nova_compute[245707]: 2026-01-22 14:28:44.643 245711 DEBUG nova.network.neutron [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:28:44 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:44.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:45.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.521 245711 DEBUG nova.compute.manager [req-fd2f16a2-cb8a-44f1-a4dc-5ebf31004b66 req-482ff390-9e5e-4bfd-ac2c-279b72115c3f 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.522 245711 DEBUG oslo_concurrency.lockutils [req-fd2f16a2-cb8a-44f1-a4dc-5ebf31004b66 req-482ff390-9e5e-4bfd-ac2c-279b72115c3f 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.522 245711 DEBUG oslo_concurrency.lockutils [req-fd2f16a2-cb8a-44f1-a4dc-5ebf31004b66 req-482ff390-9e5e-4bfd-ac2c-279b72115c3f 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.522 245711 DEBUG oslo_concurrency.lockutils [req-fd2f16a2-cb8a-44f1-a4dc-5ebf31004b66 req-482ff390-9e5e-4bfd-ac2c-279b72115c3f 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.522 245711 DEBUG nova.compute.manager [req-fd2f16a2-cb8a-44f1-a4dc-5ebf31004b66 req-482ff390-9e5e-4bfd-ac2c-279b72115c3f 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.522 245711 DEBUG nova.compute.manager [req-fd2f16a2-cb8a-44f1-a4dc-5ebf31004b66 req-482ff390-9e5e-4bfd-ac2c-279b72115c3f 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.529 245711 DEBUG oslo_concurrency.lockutils [req-8fb41e67-b356-4796-a73b-c8f3b4951c75 req-aced0a0c-0557-4443-86ec-3c9173d9cf75 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.605 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.609 245711 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Activated binding for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.609 245711 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.611 245711 DEBUG nova.virt.libvirt.vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',o
wner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:33Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.611 245711 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.613 245711 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.613 245711 DEBUG os_vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.617 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.618 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b1b16d5-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.619 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.622 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.625 245711 INFO os_vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.626 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.626 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.627 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.627 245711 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.628 245711 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deleting instance files /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.629 245711 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deletion of /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del complete
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.656 245711 DEBUG nova.compute.manager [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.656 245711 DEBUG oslo_concurrency.lockutils [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.657 245711 DEBUG oslo_concurrency.lockutils [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.657 245711 DEBUG oslo_concurrency.lockutils [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.658 245711 DEBUG nova.compute.manager [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.658 245711 WARNING nova.compute.manager [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.659 245711 DEBUG nova.compute.manager [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.659 245711 DEBUG oslo_concurrency.lockutils [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.660 245711 DEBUG oslo_concurrency.lockutils [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.660 245711 DEBUG oslo_concurrency.lockutils [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.661 245711 DEBUG nova.compute.manager [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 09:28:45 np0005592157 nova_compute[245707]: 2026-01-22 14:28:45.661 245711 WARNING nova.compute.manager [req-61ec3f18-d1b1-4c64-a71f-27f4a0d7d30d req-65a324dc-b182-40da-9e37-86ebd27b8ca5 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 09:28:45 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 22 09:28:46 np0005592157 nova_compute[245707]: 2026-01-22 14:28:46.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:28:46 np0005592157 nova_compute[245707]: 2026-01-22 14:28:46.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 09:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:28:46 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:46.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:28:47
Jan 22 09:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data']
Jan 22 09:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:28:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:47.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:47.601 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:47.602 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:28:47.602 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:47 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 09:28:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:48.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.866 245711 DEBUG nova.compute.manager [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.866 245711 DEBUG oslo_concurrency.lockutils [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.867 245711 DEBUG oslo_concurrency.lockutils [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.868 245711 DEBUG oslo_concurrency.lockutils [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.868 245711 DEBUG nova.compute.manager [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.868 245711 WARNING nova.compute.manager [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.869 245711 DEBUG nova.compute.manager [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.869 245711 DEBUG oslo_concurrency.lockutils [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.870 245711 DEBUG oslo_concurrency.lockutils [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.870 245711 DEBUG oslo_concurrency.lockutils [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.870 245711 DEBUG nova.compute.manager [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 09:28:48 np0005592157 nova_compute[245707]: 2026-01-22 14:28:48.871 245711 WARNING nova.compute.manager [req-1985ca1e-182a-4766-820d-a5a76bb9fb29 req-f072a5a5-e429-4749-b41b-f1a4a61a7a41 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:28:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:49.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.640779) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129640836, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1421, "num_deletes": 251, "total_data_size": 2032754, "memory_usage": 2061136, "flush_reason": "Manual Compaction"}
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129852563, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 1979886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48880, "largest_seqno": 50300, "table_properties": {"data_size": 1973541, "index_size": 3356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16179, "raw_average_key_size": 21, "raw_value_size": 1959839, "raw_average_value_size": 2561, "num_data_blocks": 145, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092027, "oldest_key_time": 1769092027, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 211841 microseconds, and 5262 cpu microseconds.
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.852617) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 1979886 bytes OK
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.852638) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.933994) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.934047) EVENT_LOG_v1 {"time_micros": 1769092129934037, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.934071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2026324, prev total WAL file size 2026324, number of live WAL files 2.
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.935093) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(1933KB)], [107(9511KB)]
Jan 22 09:28:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129935127, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11719436, "oldest_snapshot_seqno": -1}
Jan 22 09:28:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 9612 keys, 10082890 bytes, temperature: kUnknown
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092130232190, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10082890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10028059, "index_size": 29695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 259516, "raw_average_key_size": 26, "raw_value_size": 9862058, "raw_average_value_size": 1026, "num_data_blocks": 1122, "num_entries": 9612, "num_filter_entries": 9612, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.232770) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10082890 bytes
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.305806) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.4 rd, 33.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 10129, records dropped: 517 output_compression: NoCompression
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.305857) EVENT_LOG_v1 {"time_micros": 1769092130305837, "job": 64, "event": "compaction_finished", "compaction_time_micros": 297151, "compaction_time_cpu_micros": 23063, "output_level": 6, "num_output_files": 1, "total_output_size": 10082890, "num_input_records": 10129, "num_output_records": 9612, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092130307050, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092130310664, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:49.934986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.310873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.310884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.310888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.310896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:28:50.310899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:50 np0005592157 nova_compute[245707]: 2026-01-22 14:28:50.607 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:50 np0005592157 nova_compute[245707]: 2026-01-22 14:28:50.620 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:28:50 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:50.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.246 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:51.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:51 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:51 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.776 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.776 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.777 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.875 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.876 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.876 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.877 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:28:51 np0005592157 nova_compute[245707]: 2026-01-22 14:28:51.878 245711 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 22 09:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:28:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810589282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.349 245711 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.424 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.425 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.573 245711 WARNING nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.574 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4611MB free_disk=20.771652221679688GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": 
"0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.574 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.575 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.659 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Migration for instance 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 22 09:28:52 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.724 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 22 09:28:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:52.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.756 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.756 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.757 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.757 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Migration 94620205-fa24-46e6-99ca-3c525c4b9cfe is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.758 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.758 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.759 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.759 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:28:52 np0005592157 nova_compute[245707]: 2026-01-22 14:28:52.875 245711 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:28:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820216801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.289 245711 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.298 245711 DEBUG nova.compute.provider_tree [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.327 245711 DEBUG nova.scheduler.client.report [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:28:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:53.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.561 245711 DEBUG nova.compute.resource_tracker [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.561 245711 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.567 245711 INFO nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Jan 22 09:28:53 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.788 245711 INFO nova.scheduler.client.report [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Deleted allocation for migration 94620205-fa24-46e6-99ca-3c525c4b9cfe#033[00m
Jan 22 09:28:53 np0005592157 nova_compute[245707]: 2026-01-22 14:28:53.789 245711 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 22 09:28:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 09:28:54 np0005592157 nova_compute[245707]: 2026-01-22 14:28:54.055 245711 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating tmpfile /var/lib/nova/instances/tmpwmqqt0dz to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 22 09:28:54 np0005592157 nova_compute[245707]: 2026-01-22 14:28:54.186 245711 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 22 09:28:54 np0005592157 nova_compute[245707]: 2026-01-22 14:28:54.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:54.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:55 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:55 np0005592157 nova_compute[245707]: 2026-01-22 14:28:55.459 245711 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 22 09:28:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:55.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:55 np0005592157 nova_compute[245707]: 2026-01-22 14:28:55.506 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:55 np0005592157 nova_compute[245707]: 2026-01-22 14:28:55.507 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:55 np0005592157 nova_compute[245707]: 2026-01-22 14:28:55.507 245711 DEBUG nova.network.neutron [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:28:55 np0005592157 nova_compute[245707]: 2026-01-22 14:28:55.608 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:55 np0005592157 nova_compute[245707]: 2026-01-22 14:28:55.622 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 09:28:56 np0005592157 podman[287439]: 2026-01-22 14:28:56.344667268 +0000 UTC m=+0.067609222 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 22 09:28:56 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:56 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:56.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.288 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.289 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.289 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.289 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.289 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.289 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:28:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.471 245711 DEBUG nova.network.neutron [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:28:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:57.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.488 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.489 245711 DEBUG os_brick.utils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.490 286587 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.510 286587 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.510 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e05a3f-74e0-4a9a-9082-5751bfd4a665]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.512 286587 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.527 286587 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.527 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ad4e60-04c7-4ec0-adac-1d25258d7c7f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:efea51d9988', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.528 286587 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.543 286587 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.543 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[e32abed6-6806-4d95-b5a0-bbab089e7c5e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.544 286587 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a6e19c-8121-40a3-8044-dde98b2129a4]: (4, 'f2612c2e-5bb2-49d6-9db0-33d2b0e700a7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.544 245711 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.576 245711 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.578 245711 DEBUG os_brick.initiator.connectors.lightos [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.579 245711 DEBUG os_brick.initiator.connectors.lightos [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.579 245711 DEBUG os_brick.initiator.connectors.lightos [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 22 09:28:57 np0005592157 nova_compute[245707]: 2026-01-22 14:28:57.579 245711 DEBUG os_brick.utils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:efea51d9988', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'f2612c2e-5bb2-49d6-9db0-33d2b0e700a7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 22 09:28:57 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:57 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 12 KiB/s wr, 1 op/s
Jan 22 09:28:58 np0005592157 nova_compute[245707]: 2026-01-22 14:28:58.284 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:58 np0005592157 nova_compute[245707]: 2026-01-22 14:28:58.444 245711 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092123.4432468, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:28:58 np0005592157 nova_compute[245707]: 2026-01-22 14:28:58.444 245711 INFO nova.compute.manager [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:28:58 np0005592157 nova_compute[245707]: 2026-01-22 14:28:58.541 245711 DEBUG nova.compute.manager [None req-02792532-6122-4d58-8375-3ce3d033d32e - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:58.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:58 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:28:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.637 245711 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='430e38ad-b39f-4ad2-a8ef-a7940bd63b9e'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.638 245711 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating instance directory: /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.639 245711 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Ensure instance console log exists: /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.639 245711 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.641 245711 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.642 245711 DEBUG nova.virt.libvirt.vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:51Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.643 245711 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.643 245711 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.644 245711 DEBUG os_vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.644 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.645 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.645 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.648 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.648 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b1b16d5-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.649 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b1b16d5-1e, col_values=(('external_ids', {'iface-id': '2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:af:b6', 'vm-uuid': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.650 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:59 np0005592157 NetworkManager[48997]: <info>  [1769092139.6521] manager: (tap2b1b16d5-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.652 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.656 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.657 245711 INFO os_vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.660 245711 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 22 09:28:59 np0005592157 nova_compute[245707]: 2026-01-22 14:28:59.661 245711 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='430e38ad-b39f-4ad2-a8ef-a7940bd63b9e'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 22 09:28:59 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 12 KiB/s wr, 1 op/s
Jan 22 09:29:00 np0005592157 nova_compute[245707]: 2026-01-22 14:29:00.609 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:00.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.332 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.333 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.333 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.333 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.334 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.408 245711 DEBUG nova.network.neutron [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 22 09:29:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:01.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.739 245711 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='430e38ad-b39f-4ad2-a8ef-a7940bd63b9e'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 22 09:29:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:29:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270814979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.771 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.871 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592157 nova_compute[245707]: 2026-01-22 14:29:01.871 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592157 systemd[1]: Starting libvirt proxy daemon...
Jan 22 09:29:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Jan 22 09:29:01 np0005592157 systemd[1]: Started libvirt proxy daemon.
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.031 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.032 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4636MB free_disk=20.771652221679688GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.033 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.033 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:02 np0005592157 podman[287542]: 2026-01-22 14:29:02.049671867 +0000 UTC m=+0.081310243 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:29:02 np0005592157 kernel: tap2b1b16d5-1e: entered promiscuous mode
Jan 22 09:29:02 np0005592157 NetworkManager[48997]: <info>  [1769092142.0996] manager: (tap2b1b16d5-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.101 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:02 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:02Z|00046|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this additional chassis.
Jan 22 09:29:02 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:02Z|00047|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:29:02 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:02Z|00048|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b ovn-installed in OVS
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.116 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.119 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:02 np0005592157 systemd-udevd[287597]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:29:02 np0005592157 systemd-machined[211644]: New machine qemu-5-instance-00000012.
Jan 22 09:29:02 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:02 np0005592157 NetworkManager[48997]: <info>  [1769092142.1416] device (tap2b1b16d5-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 09:29:02 np0005592157 NetworkManager[48997]: <info>  [1769092142.1424] device (tap2b1b16d5-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.142 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Migration for instance 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 22 09:29:02 np0005592157 systemd[1]: Started Virtual Machine qemu-5-instance-00000012.
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.235 245711 INFO nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating resource usage from migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.236 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Starting to track incoming migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2 with flavor 9033f773-5da0-41ea-80ee-6af3a54f1e68 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.285 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.285 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.304 245711 WARNING nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.304 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.304 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.304 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.304 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.305 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:29:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.518 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:29:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:02.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.856 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769092142.8565152, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.857 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Started (Lifecycle Event)#033[00m
Jan 22 09:29:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:29:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182985443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.946 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:29:02 np0005592157 nova_compute[245707]: 2026-01-22 14:29:02.951 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.067 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.154 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.187 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.187 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:03 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.465 245711 DEBUG nova.virt.driver [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] Emitting event <LifecycleEvent: 1769092143.4655232, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.466 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:29:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.490 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.493 245711 DEBUG nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:29:03 np0005592157 nova_compute[245707]: 2026-01-22 14:29:03.544 245711 INFO nova.compute.manager [None req-65bc29ba-3958-463e-86bb-0d8de916a400 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 22 09:29:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:29:04 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010321427740554626 of space, bias 1.0, pg target 3.0964283221663877 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002160505944072213 of space, bias 1.0, pg target 0.6416702653894472 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:29:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:29:04 np0005592157 nova_compute[245707]: 2026-01-22 14:29:04.657 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:04.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:04 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:04Z|00049|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this chassis.
Jan 22 09:29:04 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:04Z|00050|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:29:04 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:04Z|00051|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b up in Southbound
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.816 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '21', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.817 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 bound to our chassis#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.820 157426 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b247a422-e88b-4d6e-9b42-d4947ce89ea4#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.836 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[efb340e0-6ecd-46c9-8075-15de2bfcfcee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.837 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb247a422-e1 in ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.839 264865 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb247a422-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.839 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[da96a715-3cd8-4d07-b743-a0ba76cb95a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.839 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[23739fb6-40e2-4b13-a2f8-f479dd64d7da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.855 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[6df264ec-55e8-4619-8b93-631c8521a997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.869 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[7be6a38d-2d24-4e36-9283-89edcabd22dd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.894 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe1adde-6ece-42fe-a1fa-62e79a748e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.900 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe42262-3748-4e80-906c-1e8b8825f61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 NetworkManager[48997]: <info>  [1769092144.9010] manager: (tapb247a422-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.931 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc38f47-ae3d-4c29-85d5-209412b86c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.934 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[68867e60-50f0-4d04-bef2-434a520abd1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 NetworkManager[48997]: <info>  [1769092144.9567] device (tapb247a422-e0): carrier: link connected
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.962 264880 DEBUG oslo.privsep.daemon [-] privsep: reply[4b826400-720c-45a5-98d4-343c1429dcf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.980 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[18b0a10c-7ac3-4441-a1ec-ecf811bf46f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593989, 'reachable_time': 38781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287696, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:04.996 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[2313f063-3d92-4c15-9add-36bb62c9adf8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:2b35'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593989, 'tstamp': 593989}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287697, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.013 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[f65970a2-621b-4fdf-baeb-43512132f702]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593989, 'reachable_time': 38781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287698, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.026 245711 INFO nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Post operation of migration started#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.046 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[93c7734b-2690-42b3-af7a-c3f76e9a8e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.115 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[9a00fa6f-f197-4f71-a5bf-985418057e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.116 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.116 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.116 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb247a422-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.118 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:05 np0005592157 NetworkManager[48997]: <info>  [1769092145.1186] manager: (tapb247a422-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 22 09:29:05 np0005592157 kernel: tapb247a422-e0: entered promiscuous mode
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.121 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.122 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb247a422-e0, col_values=(('external_ids', {'iface-id': '9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.123 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:05 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:05Z|00052|binding|INFO|Releasing lport 9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a from this chassis (sb_readonly=0)
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.124 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.125 157426 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.126 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[b558b8cf-a64d-4c37-bbe2-9babe49fd15b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.127 157426 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: global
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    log         /dev/log local0 debug
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    log-tag     haproxy-metadata-proxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    user        root
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    group       root
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    maxconn     1024
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    pidfile     /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    daemon
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: defaults
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    log global
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    mode http
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    option httplog
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    option dontlognull
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    option http-server-close
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    option forwardfor
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    retries                 3
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    timeout http-request    30s
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    timeout connect         30s
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    timeout client          32s
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    timeout server          32s
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    timeout http-keep-alive 30s
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: listen listener
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    bind 169.254.169.254:80
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]:    http-request add-header X-OVN-Network-ID b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 09:29:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:05.128 157426 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'env', 'PROCESS_TAG=haproxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b247a422-e88b-4d6e-9b42-d4947ce89ea4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.136 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.187 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:05 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.390 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.390 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.390 245711 DEBUG nova.network.neutron [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:29:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:05 np0005592157 podman[287731]: 2026-01-22 14:29:05.508133291 +0000 UTC m=+0.060071515 container create d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:29:05 np0005592157 systemd[1]: Started libpod-conmon-d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc.scope.
Jan 22 09:29:05 np0005592157 podman[287731]: 2026-01-22 14:29:05.475008047 +0000 UTC m=+0.026946341 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 09:29:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:29:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820bc4511085fe8a42c91a015f0d64a55818968eb1381d132f7f79bc4787818c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:05 np0005592157 podman[287731]: 2026-01-22 14:29:05.610917516 +0000 UTC m=+0.162855750 container init d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 22 09:29:05 np0005592157 nova_compute[245707]: 2026-01-22 14:29:05.611 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:05 np0005592157 podman[287731]: 2026-01-22 14:29:05.618274549 +0000 UTC m=+0.170212763 container start d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:29:05 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [NOTICE]   (287751) : New worker (287753) forked
Jan 22 09:29:05 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [NOTICE]   (287751) : Loading success.
Jan 22 09:29:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 09:29:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:06.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:07 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:07 np0005592157 nova_compute[245707]: 2026-01-22 14:29:07.443 245711 DEBUG nova.network.neutron [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:29:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:07.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:07 np0005592157 nova_compute[245707]: 2026-01-22 14:29:07.756 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:29:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 09:29:07 np0005592157 nova_compute[245707]: 2026-01-22 14:29:07.994 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:07 np0005592157 nova_compute[245707]: 2026-01-22 14:29:07.995 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:07 np0005592157 nova_compute[245707]: 2026-01-22 14:29:07.995 245711 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:08 np0005592157 nova_compute[245707]: 2026-01-22 14:29:08.000 245711 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 22 09:29:08 np0005592157 virtqemud[245202]: Domain id=5 name='instance-00000012' uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 is tainted: custom-monitor
Jan 22 09:29:08 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:08 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:08.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:09 np0005592157 nova_compute[245707]: 2026-01-22 14:29:09.006 245711 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 22 09:29:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:09.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:09 np0005592157 nova_compute[245707]: 2026-01-22 14:29:09.658 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 09:29:10 np0005592157 nova_compute[245707]: 2026-01-22 14:29:10.012 245711 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 22 09:29:10 np0005592157 nova_compute[245707]: 2026-01-22 14:29:10.017 245711 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:29:10 np0005592157 nova_compute[245707]: 2026-01-22 14:29:10.093 245711 DEBUG nova.objects.instance [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 22 09:29:10 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:10 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:10 np0005592157 nova_compute[245707]: 2026-01-22 14:29:10.654 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:10.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:11 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:11.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 09:29:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.602 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.602 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.603 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.603 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.604 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.605 245711 INFO nova.compute.manager [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Terminating instance#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.607 245711 DEBUG nova.compute.manager [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 09:29:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:12 np0005592157 kernel: tap2b1b16d5-1e (unregistering): left promiscuous mode
Jan 22 09:29:12 np0005592157 NetworkManager[48997]: <info>  [1769092152.6583] device (tap2b1b16d5-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00053|binding|INFO|Releasing lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b from this chassis (sb_readonly=0)
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00054|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b down in Southbound
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00055|binding|INFO|Removing iface tap2b1b16d5-1e ovn-installed in OVS
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.667 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.686 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.687 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '23', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.689 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 unbound from our chassis#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.691 157426 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b247a422-e88b-4d6e-9b42-d4947ce89ea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.692 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[bca0eede-198d-4ee7-b19f-6c63d0c16508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.692 157426 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace which is not needed anymore#033[00m
Jan 22 09:29:12 np0005592157 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 22 09:29:12 np0005592157 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000012.scope: Consumed 1.493s CPU time.
Jan 22 09:29:12 np0005592157 systemd-machined[211644]: Machine qemu-5-instance-00000012 terminated.
Jan 22 09:29:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:12 np0005592157 kernel: tap2b1b16d5-1e: entered promiscuous mode
Jan 22 09:29:12 np0005592157 NetworkManager[48997]: <info>  [1769092152.8245] manager: (tap2b1b16d5-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 22 09:29:12 np0005592157 systemd-udevd[287768]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:29:12 np0005592157 kernel: tap2b1b16d5-1e (unregistering): left promiscuous mode
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00056|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this chassis.
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00057|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.864 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: hostname: compute-0
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [NOTICE]   (287751) : haproxy version is 2.8.14-c23fe91
Jan 22 09:29:12 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [NOTICE]   (287751) : path to executable is /usr/sbin/haproxy
Jan 22 09:29:12 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [WARNING]  (287751) : Exiting Master process...
Jan 22 09:29:12 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [WARNING]  (287751) : Exiting Master process...
Jan 22 09:29:12 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [ALERT]    (287751) : Current worker (287753) exited with code 143 (Terminated)
Jan 22 09:29:12 np0005592157 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[287747]: [WARNING]  (287751) : All workers exited. Exiting... (0)
Jan 22 09:29:12 np0005592157 systemd[1]: libpod-d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc.scope: Deactivated successfully.
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.876 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '23', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:12 np0005592157 podman[287789]: 2026-01-22 14:29:12.878279226 +0000 UTC m=+0.090383468 container died d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00058|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b ovn-installed in OVS
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00059|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b up in Southbound
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00060|binding|INFO|Releasing lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b from this chassis (sb_readonly=1)
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00061|if_status|INFO|Not setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b down as sb is readonly
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.885 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00062|binding|INFO|Removing iface tap2b1b16d5-1e ovn-installed in OVS
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.887 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.888 245711 INFO nova.virt.libvirt.driver [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance destroyed successfully.#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.889 245711 DEBUG nova.objects.instance [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lazy-loading 'resources' on Instance uuid 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.898 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 virtnodedevd[245570]: ethtool ioctl error on tap2b1b16d5-1e: No such device
Jan 22 09:29:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc-userdata-shm.mount: Deactivated successfully.
Jan 22 09:29:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-820bc4511085fe8a42c91a015f0d64a55818968eb1381d132f7f79bc4787818c-merged.mount: Deactivated successfully.
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00063|binding|INFO|Releasing lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b from this chassis (sb_readonly=0)
Jan 22 09:29:12 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:12Z|00064|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b down in Southbound
Jan 22 09:29:12 np0005592157 podman[287789]: 2026-01-22 14:29:12.920580228 +0000 UTC m=+0.132684470 container cleanup d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.921 245711 DEBUG nova.virt.libvirt.vif [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',
owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:29:10Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.921 245711 DEBUG nova.network.os_vif_util [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.922 245711 DEBUG nova.network.os_vif_util [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.922 245711 DEBUG os_vif [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.923 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.924 245711 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b1b16d5-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.924 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '23', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4af189b640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.925 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.926 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.928 245711 INFO os_vif [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')#033[00m
Jan 22 09:29:12 np0005592157 systemd[1]: libpod-conmon-d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc.scope: Deactivated successfully.
Jan 22 09:29:12 np0005592157 podman[287837]: 2026-01-22 14:29:12.982162849 +0000 UTC m=+0.040333984 container remove d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.988 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[89efb9ca-9f60-4d09-a8c4-f9d185967cb0]: (4, ('Thu Jan 22 02:29:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc)\nd5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc\nThu Jan 22 02:29:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (d5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc)\nd5a3d51f7e4aa664528ee09d2adb1a57352cc87997f471f35ff11d3cd658f5bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.990 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[820dbeb5-2001-472c-98c6-44eddf5d94a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.991 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.992 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 kernel: tapb247a422-e0: left promiscuous mode
Jan 22 09:29:12 np0005592157 nova_compute[245707]: 2026-01-22 14:29:12.994 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:12.996 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[e024b654-319c-4e6b-8433-fd88407e55f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.006 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.012 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[3148bcff-4555-4674-9ce2-22f7542bb196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.013 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bcbbfc-6d14-45c8-be8d-b2265d70b77a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.028 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[04b440a0-5b3c-41b5-b43b-567599f5ce76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593983, 'reachable_time': 15246, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287867, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.030 157842 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.031 157842 DEBUG oslo.privsep.daemon [-] privsep: reply[31f92601-2acd-4917-b612-6383f15c423f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.032 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 unbound from our chassis#033[00m
Jan 22 09:29:13 np0005592157 systemd[1]: run-netns-ovnmeta\x2db247a422\x2de88b\x2d4d6e\x2d9b42\x2dd4947ce89ea4.mount: Deactivated successfully.
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.033 157426 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b247a422-e88b-4d6e-9b42-d4947ce89ea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.034 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6ca1ef-5759-4784-b26f-144a105d652e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.034 157426 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 unbound from our chassis#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.036 157426 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b247a422-e88b-4d6e-9b42-d4947ce89ea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 09:29:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:13.036 264865 DEBUG oslo.privsep.daemon [-] privsep: reply[43088e03-6c8b-4594-a505-c28e53d42c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.136 245711 INFO nova.virt.libvirt.driver [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deleting instance files /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.137 245711 INFO nova.virt.libvirt.driver [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deletion of /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del complete#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.218 245711 DEBUG nova.compute.manager [req-58809242-d0f8-436e-9c65-bc7f27d68ed8 req-d48f04c8-b555-4db4-8200-17874d81ad1b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.218 245711 DEBUG oslo_concurrency.lockutils [req-58809242-d0f8-436e-9c65-bc7f27d68ed8 req-d48f04c8-b555-4db4-8200-17874d81ad1b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.219 245711 DEBUG oslo_concurrency.lockutils [req-58809242-d0f8-436e-9c65-bc7f27d68ed8 req-d48f04c8-b555-4db4-8200-17874d81ad1b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.219 245711 DEBUG oslo_concurrency.lockutils [req-58809242-d0f8-436e-9c65-bc7f27d68ed8 req-d48f04c8-b555-4db4-8200-17874d81ad1b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.220 245711 DEBUG nova.compute.manager [req-58809242-d0f8-436e-9c65-bc7f27d68ed8 req-d48f04c8-b555-4db4-8200-17874d81ad1b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.220 245711 DEBUG nova.compute.manager [req-58809242-d0f8-436e-9c65-bc7f27d68ed8 req-d48f04c8-b555-4db4-8200-17874d81ad1b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.240 245711 INFO nova.compute.manager [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.241 245711 DEBUG oslo.service.loopingcall [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.241 245711 DEBUG nova.compute.manager [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.242 245711 DEBUG nova.network.neutron [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 09:29:13 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:13Z|00065|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 09:29:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:13.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:13 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:13 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:13 np0005592157 nova_compute[245707]: 2026-01-22 14:29:13.948 245711 DEBUG nova.network.neutron [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:29:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.022 245711 INFO nova.compute.manager [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 0.78 seconds to deallocate network for instance.#033[00m
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.043 245711 DEBUG nova.compute.manager [req-37d76d4f-5ff7-4cc2-a62b-971aa12694cc req-64365f43-ad18-4514-8f5d-7bf093b8a4c0 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-deleted-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.393 245711 INFO nova.compute.manager [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 0.37 seconds to detach 1 volumes for instance.#033[00m
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.394 245711 DEBUG nova.compute.manager [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deleting volume: 6e173a8e-fd98-4de4-a470-2c50f67a6d48 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 22 09:29:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:29:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:14.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.825 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.825 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.830 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:14 np0005592157 nova_compute[245707]: 2026-01-22 14:29:14.891 245711 INFO nova.scheduler.client.report [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Deleted allocations for instance 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.108 245711 DEBUG oslo_concurrency.lockutils [None req-4e6ce74f-6bc1-4811-b631-901bd15dbcf7 32df6d966d7540dd851bf51a1148be65 6b4b5b635cbf4888966d80692b78281f - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.367 245711 DEBUG nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.368 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.368 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.368 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.369 245711 DEBUG nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.369 245711 WARNING nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state deleted and task_state None.#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.369 245711 DEBUG nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.369 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.369 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.370 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.370 245711 DEBUG nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.370 245711 WARNING nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state deleted and task_state None.#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.371 245711 DEBUG nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.371 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.371 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.371 245711 DEBUG oslo_concurrency.lockutils [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.371 245711 DEBUG nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.372 245711 WARNING nova.compute.manager [req-f5c3faea-4ea2-4a10-871e-c512c963593c req-8a07053d-434b-4ac2-810d-86f944442bfd 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state deleted and task_state None.#033[00m
Jan 22 09:29:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:15.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:15 np0005592157 nova_compute[245707]: 2026-01-22 14:29:15.657 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:15 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 541 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Jan 22 09:29:16 np0005592157 nova_compute[245707]: 2026-01-22 14:29:16.337 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:16 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:16.338 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:16 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:16.341 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:29:16 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:17.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:17 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:17 np0005592157 nova_compute[245707]: 2026-01-22 14:29:17.969 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 541 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 938 B/s wr, 27 op/s
Jan 22 09:29:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:18.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:18 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:18 np0005592157 nova_compute[245707]: 2026-01-22 14:29:18.895 245711 DEBUG nova.compute.manager [req-c50f5027-ea80-4ef7-9330-d0e96780fc0d req-5b1de498-d38d-46ce-9650-f8b52a5e6aae 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Received event network-vif-deleted-d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:18 np0005592157 nova_compute[245707]: 2026-01-22 14:29:18.896 245711 INFO nova.compute.manager [req-c50f5027-ea80-4ef7-9330-d0e96780fc0d req-5b1de498-d38d-46ce-9650-f8b52a5e6aae 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Neutron deleted interface d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f; detaching it from the instance and deleting it from the info cache#033[00m
Jan 22 09:29:18 np0005592157 nova_compute[245707]: 2026-01-22 14:29:18.897 245711 DEBUG nova.network.neutron [req-c50f5027-ea80-4ef7-9330-d0e96780fc0d req-5b1de498-d38d-46ce-9650-f8b52a5e6aae 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:29:18 np0005592157 nova_compute[245707]: 2026-01-22 14:29:18.916 245711 DEBUG nova.compute.manager [req-c50f5027-ea80-4ef7-9330-d0e96780fc0d req-5b1de498-d38d-46ce-9650-f8b52a5e6aae 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Detach interface failed, port_id=d62be26a-cea9-4e5b-8dbc-4ca3d1cd584f, reason: Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 22 09:29:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:19.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:19 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 09:29:20 np0005592157 nova_compute[245707]: 2026-01-22 14:29:20.704 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:29:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:20.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:20 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:21.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:21 np0005592157 nova_compute[245707]: 2026-01-22 14:29:21.561 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:29:21 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 09:29:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:22.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:22 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:23 np0005592157 nova_compute[245707]: 2026-01-22 14:29:23.014 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:29:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:23.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 09:29:24 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:24.345 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:29:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:24.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:25 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:25.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:25 np0005592157 nova_compute[245707]: 2026-01-22 14:29:25.707 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:29:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:26.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cd79966a-6302-4bf4-9733-bced11a1a34d does not exist
Jan 22 09:29:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ef21b759-f9fc-4986-873d-ec33028cf33c does not exist
Jan 22 09:29:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 38fb7055-90ec-4f72-a607-00c0236d4e5b does not exist
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:29:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:29:27 np0005592157 podman[288083]: 2026-01-22 14:29:27.084501792 +0000 UTC m=+0.060785492 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:29:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:27.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:27 np0005592157 podman[288216]: 2026-01-22 14:29:27.518917683 +0000 UTC m=+0.037586286 container create b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:29:27 np0005592157 systemd[1]: Started libpod-conmon-b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd.scope.
Jan 22 09:29:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:29:27 np0005592157 podman[288216]: 2026-01-22 14:29:27.500744011 +0000 UTC m=+0.019412594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:29:27 np0005592157 podman[288216]: 2026-01-22 14:29:27.607445694 +0000 UTC m=+0.126114307 container init b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:29:27 np0005592157 podman[288216]: 2026-01-22 14:29:27.613999437 +0000 UTC m=+0.132668010 container start b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:29:27 np0005592157 podman[288216]: 2026-01-22 14:29:27.617625967 +0000 UTC m=+0.136294630 container attach b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elion, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:29:27 np0005592157 quirky_elion[288233]: 167 167
Jan 22 09:29:27 np0005592157 systemd[1]: libpod-b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd.scope: Deactivated successfully.
Jan 22 09:29:27 np0005592157 podman[288216]: 2026-01-22 14:29:27.621725999 +0000 UTC m=+0.140394582 container died b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elion, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:29:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b4465fc4b9fa5183f29064422a9f14ba8c5916ddd45f2faeaf4b7562c0f31583-merged.mount: Deactivated successfully.
Jan 22 09:29:27 np0005592157 podman[288216]: 2026-01-22 14:29:27.659342894 +0000 UTC m=+0.178011477 container remove b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:29:27 np0005592157 systemd[1]: libpod-conmon-b965149104e3911b78c39a46f13243c116cd4be825c0ec791dfb3764a42665bd.scope: Deactivated successfully.
Jan 22 09:29:27 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:29:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:29:27 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:27 np0005592157 podman[288256]: 2026-01-22 14:29:27.87518957 +0000 UTC m=+0.056441844 container create 2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:29:27 np0005592157 nova_compute[245707]: 2026-01-22 14:29:27.891 245711 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092152.8902678, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 09:29:27 np0005592157 nova_compute[245707]: 2026-01-22 14:29:27.891 245711 INFO nova.compute.manager [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Stopped (Lifecycle Event)
Jan 22 09:29:27 np0005592157 systemd[1]: Started libpod-conmon-2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4.scope.
Jan 22 09:29:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:29:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51039fb0ad91acd43399f2c7f5b8e12414b3097196ec3e4003b37940535c3187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:27 np0005592157 podman[288256]: 2026-01-22 14:29:27.846512487 +0000 UTC m=+0.027764831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:29:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51039fb0ad91acd43399f2c7f5b8e12414b3097196ec3e4003b37940535c3187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51039fb0ad91acd43399f2c7f5b8e12414b3097196ec3e4003b37940535c3187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51039fb0ad91acd43399f2c7f5b8e12414b3097196ec3e4003b37940535c3187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51039fb0ad91acd43399f2c7f5b8e12414b3097196ec3e4003b37940535c3187/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:27 np0005592157 nova_compute[245707]: 2026-01-22 14:29:27.945 245711 DEBUG nova.compute.manager [None req-81500821-63bc-47df-a0bc-63fabcb855a9 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 09:29:27 np0005592157 podman[288256]: 2026-01-22 14:29:27.949610951 +0000 UTC m=+0.130863235 container init 2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:29:27 np0005592157 podman[288256]: 2026-01-22 14:29:27.959552168 +0000 UTC m=+0.140804422 container start 2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:29:27 np0005592157 podman[288256]: 2026-01-22 14:29:27.9632666 +0000 UTC m=+0.144518854 container attach 2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 09:29:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 2 op/s
Jan 22 09:29:28 np0005592157 nova_compute[245707]: 2026-01-22 14:29:28.016 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:29:28 np0005592157 nifty_mclaren[288272]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:29:28 np0005592157 nifty_mclaren[288272]: --> relative data size: 1.0
Jan 22 09:29:28 np0005592157 nifty_mclaren[288272]: --> All data devices are unavailable
Jan 22 09:29:28 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:28 np0005592157 systemd[1]: libpod-2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4.scope: Deactivated successfully.
Jan 22 09:29:28 np0005592157 conmon[288272]: conmon 2c8a53f0fe02f7f40bb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4.scope/container/memory.events
Jan 22 09:29:28 np0005592157 podman[288256]: 2026-01-22 14:29:28.745668472 +0000 UTC m=+0.926920716 container died 2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 09:29:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-51039fb0ad91acd43399f2c7f5b8e12414b3097196ec3e4003b37940535c3187-merged.mount: Deactivated successfully.
Jan 22 09:29:28 np0005592157 podman[288256]: 2026-01-22 14:29:28.797178422 +0000 UTC m=+0.978430676 container remove 2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 09:29:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:28.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:28 np0005592157 systemd[1]: libpod-conmon-2c8a53f0fe02f7f40bb5421cdd393e5133370836504f53f0957c47dfead5f0f4.scope: Deactivated successfully.
Jan 22 09:29:29 np0005592157 podman[288441]: 2026-01-22 14:29:29.348637433 +0000 UTC m=+0.043426351 container create 2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:29:29 np0005592157 systemd[1]: Started libpod-conmon-2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576.scope.
Jan 22 09:29:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:29:29 np0005592157 podman[288441]: 2026-01-22 14:29:29.417839373 +0000 UTC m=+0.112628311 container init 2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 22 09:29:29 np0005592157 podman[288441]: 2026-01-22 14:29:29.425705409 +0000 UTC m=+0.120494337 container start 2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:29:29 np0005592157 podman[288441]: 2026-01-22 14:29:29.333017084 +0000 UTC m=+0.027806022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:29:29 np0005592157 confident_bartik[288457]: 167 167
Jan 22 09:29:29 np0005592157 systemd[1]: libpod-2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576.scope: Deactivated successfully.
Jan 22 09:29:29 np0005592157 podman[288441]: 2026-01-22 14:29:29.430316023 +0000 UTC m=+0.125104941 container attach 2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:29:29 np0005592157 podman[288441]: 2026-01-22 14:29:29.430683793 +0000 UTC m=+0.125472721 container died 2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:29:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7ebb087bf6465147bf2f7f021be238cc9eb0a17572d8ede7ad2f3718562fdf01-merged.mount: Deactivated successfully.
Jan 22 09:29:29 np0005592157 podman[288441]: 2026-01-22 14:29:29.46598301 +0000 UTC m=+0.160771928 container remove 2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:29:29 np0005592157 systemd[1]: libpod-conmon-2f1c93e78b08ed87d028dec8e7ad331d3e4fe4701471545d37fdbd967902a576.scope: Deactivated successfully.
Jan 22 09:29:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:29.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:29 np0005592157 podman[288482]: 2026-01-22 14:29:29.627146647 +0000 UTC m=+0.045350718 container create f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:29:29 np0005592157 systemd[1]: Started libpod-conmon-f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d.scope.
Jan 22 09:29:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:29:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea758221577cdaf288935b32a53a4564e4824ea6db2b5359c5764ac96152e1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea758221577cdaf288935b32a53a4564e4824ea6db2b5359c5764ac96152e1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea758221577cdaf288935b32a53a4564e4824ea6db2b5359c5764ac96152e1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:29 np0005592157 podman[288482]: 2026-01-22 14:29:29.611235212 +0000 UTC m=+0.029439303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:29:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea758221577cdaf288935b32a53a4564e4824ea6db2b5359c5764ac96152e1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:29 np0005592157 podman[288482]: 2026-01-22 14:29:29.723674577 +0000 UTC m=+0.141878658 container init f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:29:29 np0005592157 podman[288482]: 2026-01-22 14:29:29.731863051 +0000 UTC m=+0.150067122 container start f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:29:29 np0005592157 podman[288482]: 2026-01-22 14:29:29.736960807 +0000 UTC m=+0.155164898 container attach f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:29:29 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 2 op/s
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]: {
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:    "0": [
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:        {
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "devices": [
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "/dev/loop3"
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            ],
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "lv_name": "ceph_lv0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "lv_size": "7511998464",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "name": "ceph_lv0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "tags": {
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.cluster_name": "ceph",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.crush_device_class": "",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.encrypted": "0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.osd_id": "0",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.type": "block",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:                "ceph.vdo": "0"
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            },
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "type": "block",
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:            "vg_name": "ceph_vg0"
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:        }
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]:    ]
Jan 22 09:29:30 np0005592157 inspiring_newton[288499]: }
Jan 22 09:29:30 np0005592157 systemd[1]: libpod-f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d.scope: Deactivated successfully.
Jan 22 09:29:30 np0005592157 podman[288482]: 2026-01-22 14:29:30.467713286 +0000 UTC m=+0.885917397 container died f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:29:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ea758221577cdaf288935b32a53a4564e4824ea6db2b5359c5764ac96152e1d3-merged.mount: Deactivated successfully.
Jan 22 09:29:30 np0005592157 podman[288482]: 2026-01-22 14:29:30.530995499 +0000 UTC m=+0.949199570 container remove f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:29:30 np0005592157 systemd[1]: libpod-conmon-f6e93731b7915460c42bd34e6ad627d92d60b60aa505e4f6398df9896d572b5d.scope: Deactivated successfully.
Jan 22 09:29:30 np0005592157 nova_compute[245707]: 2026-01-22 14:29:30.708 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:30 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:30.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:31 np0005592157 podman[288661]: 2026-01-22 14:29:31.089455453 +0000 UTC m=+0.039998095 container create bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:29:31 np0005592157 systemd[1]: Started libpod-conmon-bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28.scope.
Jan 22 09:29:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:29:31 np0005592157 podman[288661]: 2026-01-22 14:29:31.15890641 +0000 UTC m=+0.109449062 container init bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:29:31 np0005592157 podman[288661]: 2026-01-22 14:29:31.166412547 +0000 UTC m=+0.116955189 container start bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:29:31 np0005592157 podman[288661]: 2026-01-22 14:29:31.170113689 +0000 UTC m=+0.120656331 container attach bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:29:31 np0005592157 podman[288661]: 2026-01-22 14:29:31.07525453 +0000 UTC m=+0.025797192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:29:31 np0005592157 systemd[1]: libpod-bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28.scope: Deactivated successfully.
Jan 22 09:29:31 np0005592157 stoic_chaplygin[288677]: 167 167
Jan 22 09:29:31 np0005592157 podman[288661]: 2026-01-22 14:29:31.171295838 +0000 UTC m=+0.121838480 container died bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Jan 22 09:29:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5061da41ab211cbb6e48f8ad5fc8569d785395a8ed6cbf718df18661fe667930-merged.mount: Deactivated successfully.
Jan 22 09:29:31 np0005592157 podman[288661]: 2026-01-22 14:29:31.224721246 +0000 UTC m=+0.175263888 container remove bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:29:31 np0005592157 systemd[1]: libpod-conmon-bf9c5ffc6d6ec9d7dee5a347fab65f58bbaa2b98a85192c659c1c23dfb61aa28.scope: Deactivated successfully.
Jan 22 09:29:31 np0005592157 podman[288701]: 2026-01-22 14:29:31.387258678 +0000 UTC m=+0.041778320 container create 739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:29:31 np0005592157 systemd[1]: Started libpod-conmon-739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737.scope.
Jan 22 09:29:31 np0005592157 podman[288701]: 2026-01-22 14:29:31.368476161 +0000 UTC m=+0.022995823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:29:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:29:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e798351cd89786fd0ba1964522385d85659845a1f416f3690fdf3bdeac6607/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e798351cd89786fd0ba1964522385d85659845a1f416f3690fdf3bdeac6607/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e798351cd89786fd0ba1964522385d85659845a1f416f3690fdf3bdeac6607/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e798351cd89786fd0ba1964522385d85659845a1f416f3690fdf3bdeac6607/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:29:31 np0005592157 podman[288701]: 2026-01-22 14:29:31.498545344 +0000 UTC m=+0.153065006 container init 739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:29:31 np0005592157 podman[288701]: 2026-01-22 14:29:31.504249566 +0000 UTC m=+0.158769208 container start 739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:29:31 np0005592157 podman[288701]: 2026-01-22 14:29:31.508232845 +0000 UTC m=+0.162752487 container attach 739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:29:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:31.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:31 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:32 np0005592157 podman[288725]: 2026-01-22 14:29:32.355969721 +0000 UTC m=+0.094580153 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]: {
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]:        "osd_id": 0,
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]:        "type": "bluestore"
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]:    }
Jan 22 09:29:32 np0005592157 hardcore_perlman[288717]: }
Jan 22 09:29:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:32 np0005592157 systemd[1]: libpod-739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737.scope: Deactivated successfully.
Jan 22 09:29:32 np0005592157 podman[288765]: 2026-01-22 14:29:32.447708422 +0000 UTC m=+0.023403853 container died 739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:29:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-15e798351cd89786fd0ba1964522385d85659845a1f416f3690fdf3bdeac6607-merged.mount: Deactivated successfully.
Jan 22 09:29:32 np0005592157 podman[288765]: 2026-01-22 14:29:32.502915574 +0000 UTC m=+0.078610995 container remove 739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:29:32 np0005592157 systemd[1]: libpod-conmon-739ddf85fbd0d786a6e728c4beaf9e2b42cccb3dc7e341e8d867f506d05ff737.scope: Deactivated successfully.
Jan 22 09:29:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:29:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:29:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:32 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d5a1d13d-71f5-4ba8-b0ad-4c75d4f56049 does not exist
Jan 22 09:29:32 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4e8291e6-ebc1-42c9-b594-e9cf19a1595f does not exist
Jan 22 09:29:32 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ffa34dec-1133-47dd-9d8a-adfce5f9edee does not exist
Jan 22 09:29:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:32.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:33 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:33 np0005592157 nova_compute[245707]: 2026-01-22 14:29:33.018 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:29:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:33.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:29:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:34 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:34 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:34.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:35 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:35.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:35 np0005592157 nova_compute[245707]: 2026-01-22 14:29:35.709 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:36 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:36.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:37 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:37.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:38 np0005592157 nova_compute[245707]: 2026-01-22 14:29:38.064 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:38 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:39 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:39.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:40 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:40 np0005592157 nova_compute[245707]: 2026-01-22 14:29:40.734 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:29:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:40.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:29:41 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:41.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:42 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:42.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:43 np0005592157 nova_compute[245707]: 2026-01-22 14:29:43.112 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:43 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:43 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:44 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:44.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:45.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:45 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:45 np0005592157 nova_compute[245707]: 2026-01-22 14:29:45.736 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:29:46 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:47 np0005592157 nova_compute[245707]: 2026-01-22 14:29:47.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:47 np0005592157 nova_compute[245707]: 2026-01-22 14:29:47.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:29:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:29:47
Jan 22 09:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 22 09:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:29:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:47.603 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:47.604 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:29:47.604 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:47 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:47 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:48 np0005592157 nova_compute[245707]: 2026-01-22 14:29:48.115 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:48.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:49 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:49 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:50 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:50 np0005592157 nova_compute[245707]: 2026-01-22 14:29:50.739 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:51 np0005592157 nova_compute[245707]: 2026-01-22 14:29:51.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:51 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:52 np0005592157 nova_compute[245707]: 2026-01-22 14:29:52.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:52 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:52.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:53 np0005592157 nova_compute[245707]: 2026-01-22 14:29:53.151 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:53 np0005592157 nova_compute[245707]: 2026-01-22 14:29:53.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:53.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:53 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:53 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:54 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:55.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:55 np0005592157 nova_compute[245707]: 2026-01-22 14:29:55.776 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:55 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:56 np0005592157 nova_compute[245707]: 2026-01-22 14:29:56.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:57 np0005592157 podman[288893]: 2026-01-22 14:29:57.313111925 +0000 UTC m=+0.050292901 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:29:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:57.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:57 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:57 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:29:58 np0005592157 nova_compute[245707]: 2026-01-22 14:29:58.154 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:58 np0005592157 nova_compute[245707]: 2026-01-22 14:29:58.239 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:58 np0005592157 ovn_controller[146940]: 2026-01-22T14:29:58Z|00066|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 22 09:29:58 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.322 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.322 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.322 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.322 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.323 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:29:59 np0005592157 nova_compute[245707]: 2026-01-22 14:29:59.323 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:29:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:29:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:59.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:59 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:59 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:00 np0005592157 nova_compute[245707]: 2026-01-22 14:30:00.778 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:01 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:02.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:02 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:02 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.155 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.327 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.328 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:03 np0005592157 podman[288965]: 2026-01-22 14:30:03.341236868 +0000 UTC m=+0.080673817 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.374 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.374 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.374 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.375 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.375 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:30:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901878159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.811 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.897 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:30:03 np0005592157 nova_compute[245707]: 2026-01-22 14:30:03.898 245711 DEBUG nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:30:03 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.061 245711 WARNING nova.virt.libvirt.driver [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.064 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=20.77179718017578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.064 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.065 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.184 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 18becd7f-5901-49d8-87eb-548e630001aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.185 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 1089392f-9bda-4904-9370-95fc2c3dd7c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.186 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance b8bec212-84ad-47fd-9608-2cc1999da6c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.186 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.186 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Instance df283133-db55-4a7e-a651-12dd25bae88e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.187 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.187 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010314521100874744 of space, bias 1.0, pg target 3.0943563302624235 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:30:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.338 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:30:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3235649745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.746 245711 DEBUG oslo_concurrency.processutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.752 245711 DEBUG nova.compute.provider_tree [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.784 245711 DEBUG nova.scheduler.client.report [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.816 245711 DEBUG nova.compute.resource_tracker [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:30:04 np0005592157 nova_compute[245707]: 2026-01-22 14:30:04.816 245711 DEBUG oslo_concurrency.lockutils [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:04 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:05.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:05 np0005592157 nova_compute[245707]: 2026-01-22 14:30:05.812 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:06.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:07 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:07 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:30:07.244 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:30:07 np0005592157 nova_compute[245707]: 2026-01-22 14:30:07.244 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:07 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:30:07.246 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:30:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:08 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:08 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:08 np0005592157 nova_compute[245707]: 2026-01-22 14:30:08.157 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:09 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:09.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:10 np0005592157 nova_compute[245707]: 2026-01-22 14:30:10.862 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:10.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:11 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:30:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:30:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:13 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:13 np0005592157 nova_compute[245707]: 2026-01-22 14:30:13.159 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:13.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:14.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:15 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:15 np0005592157 nova_compute[245707]: 2026-01-22 14:30:15.863 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:16 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:30:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:16.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:17 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:17 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:30:17.249 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:30:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:17.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:18 np0005592157 nova_compute[245707]: 2026-01-22 14:30:18.228 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:18 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:18 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:30:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415948044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:30:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:30:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415948044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:30:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:18.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:19 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:19.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:20 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:20 np0005592157 nova_compute[245707]: 2026-01-22 14:30:20.903 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:21 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:21.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:22 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:22.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:23 np0005592157 nova_compute[245707]: 2026-01-22 14:30:23.230 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:23 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:23 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:23.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:24 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:25 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:25.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:25 np0005592157 nova_compute[245707]: 2026-01-22 14:30:25.956 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:26 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:27 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:27.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:28 np0005592157 nova_compute[245707]: 2026-01-22 14:30:28.268 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:28 np0005592157 podman[289101]: 2026-01-22 14:30:28.368538937 +0000 UTC m=+0.060345611 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:30:28 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:28 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:29 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:29.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:30 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:30 np0005592157 nova_compute[245707]: 2026-01-22 14:30:30.990 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:31 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:31.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:32 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:33 np0005592157 nova_compute[245707]: 2026-01-22 14:30:33.270 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:33.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2eca6bc2-4039-4b67-a2e2-028248dfc94c does not exist
Jan 22 09:30:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0ff17335-ed11-4d5e-89de-9ce8cd08f6dd does not exist
Jan 22 09:30:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e04ceb70-6195-4e22-942b-f54bac538fc3 does not exist
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:30:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:30:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:34 np0005592157 podman[289281]: 2026-01-22 14:30:34.175520429 +0000 UTC m=+0.085184468 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:30:34 np0005592157 podman[289421]: 2026-01-22 14:30:34.671351947 +0000 UTC m=+0.055078160 container create cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_turing, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:30:34 np0005592157 podman[289421]: 2026-01-22 14:30:34.643009912 +0000 UTC m=+0.026736145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:30:34 np0005592157 systemd[1]: Started libpod-conmon-cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc.scope.
Jan 22 09:30:34 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:30:34 np0005592157 podman[289421]: 2026-01-22 14:30:34.776574533 +0000 UTC m=+0.160300766 container init cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_turing, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:30:34 np0005592157 podman[289421]: 2026-01-22 14:30:34.787059224 +0000 UTC m=+0.170785447 container start cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_turing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:30:34 np0005592157 podman[289421]: 2026-01-22 14:30:34.790996972 +0000 UTC m=+0.174723175 container attach cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:30:34 np0005592157 stupefied_turing[289437]: 167 167
Jan 22 09:30:34 np0005592157 systemd[1]: libpod-cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc.scope: Deactivated successfully.
Jan 22 09:30:34 np0005592157 podman[289421]: 2026-01-22 14:30:34.795821842 +0000 UTC m=+0.179548055 container died cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:30:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-45ab0a55cafab99a02512411474bc3ef152a0cc15560e91c2065f5160ea463de-merged.mount: Deactivated successfully.
Jan 22 09:30:34 np0005592157 podman[289421]: 2026-01-22 14:30:34.845277001 +0000 UTC m=+0.229003194 container remove cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_turing, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:30:34 np0005592157 systemd[1]: libpod-conmon-cabfa31095964d1ba85f42301130fef107336447e9dfca6a8b25fd075d41bbbc.scope: Deactivated successfully.
Jan 22 09:30:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:34.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:35 np0005592157 podman[289459]: 2026-01-22 14:30:35.022449096 +0000 UTC m=+0.043552854 container create 5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:30:35 np0005592157 systemd[1]: Started libpod-conmon-5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2.scope.
Jan 22 09:30:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:30:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e787295775fdb71473f73f4ccefb76be4b90d859100d9f5820328b992c58b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e787295775fdb71473f73f4ccefb76be4b90d859100d9f5820328b992c58b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e787295775fdb71473f73f4ccefb76be4b90d859100d9f5820328b992c58b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:35 np0005592157 podman[289459]: 2026-01-22 14:30:35.00570884 +0000 UTC m=+0.026812578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:30:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e787295775fdb71473f73f4ccefb76be4b90d859100d9f5820328b992c58b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e787295775fdb71473f73f4ccefb76be4b90d859100d9f5820328b992c58b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:35 np0005592157 podman[289459]: 2026-01-22 14:30:35.123207511 +0000 UTC m=+0.144311279 container init 5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:30:35 np0005592157 podman[289459]: 2026-01-22 14:30:35.136411889 +0000 UTC m=+0.157515607 container start 5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:30:35 np0005592157 podman[289459]: 2026-01-22 14:30:35.140495311 +0000 UTC m=+0.161599039 container attach 5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:30:35 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:35 np0005592157 nova_compute[245707]: 2026-01-22 14:30:35.991 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:35 np0005592157 flamboyant_cartwright[289475]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:30:35 np0005592157 flamboyant_cartwright[289475]: --> relative data size: 1.0
Jan 22 09:30:35 np0005592157 flamboyant_cartwright[289475]: --> All data devices are unavailable
Jan 22 09:30:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:36 np0005592157 systemd[1]: libpod-5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2.scope: Deactivated successfully.
Jan 22 09:30:36 np0005592157 podman[289459]: 2026-01-22 14:30:36.031452862 +0000 UTC m=+1.052556600 container died 5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:30:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-88e787295775fdb71473f73f4ccefb76be4b90d859100d9f5820328b992c58b2-merged.mount: Deactivated successfully.
Jan 22 09:30:36 np0005592157 podman[289459]: 2026-01-22 14:30:36.080007959 +0000 UTC m=+1.101111677 container remove 5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:30:36 np0005592157 systemd[1]: libpod-conmon-5e9832672df55eceb5f196ca080d694265c24619edb1ee40773006b1d5aa61d2.scope: Deactivated successfully.
Jan 22 09:30:36 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:36 np0005592157 podman[289647]: 2026-01-22 14:30:36.727712422 +0000 UTC m=+0.050627680 container create 3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:30:36 np0005592157 systemd[1]: Started libpod-conmon-3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583.scope.
Jan 22 09:30:36 np0005592157 podman[289647]: 2026-01-22 14:30:36.701284615 +0000 UTC m=+0.024199953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:30:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:30:36 np0005592157 podman[289647]: 2026-01-22 14:30:36.829370649 +0000 UTC m=+0.152285997 container init 3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:30:36 np0005592157 podman[289647]: 2026-01-22 14:30:36.841179193 +0000 UTC m=+0.164094491 container start 3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:30:36 np0005592157 podman[289647]: 2026-01-22 14:30:36.845406878 +0000 UTC m=+0.168322226 container attach 3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:30:36 np0005592157 tender_brattain[289663]: 167 167
Jan 22 09:30:36 np0005592157 systemd[1]: libpod-3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583.scope: Deactivated successfully.
Jan 22 09:30:36 np0005592157 conmon[289663]: conmon 3ec4c726f3b38f595d47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583.scope/container/memory.events
Jan 22 09:30:36 np0005592157 podman[289647]: 2026-01-22 14:30:36.849672334 +0000 UTC m=+0.172587612 container died 3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:30:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-876bd0d7051b3a24b5f2a10759e86ad26c17736928551356911bb624ba85edca-merged.mount: Deactivated successfully.
Jan 22 09:30:36 np0005592157 podman[289647]: 2026-01-22 14:30:36.897588145 +0000 UTC m=+0.220503443 container remove 3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:30:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:36.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:36 np0005592157 systemd[1]: libpod-conmon-3ec4c726f3b38f595d4782c37c1444f5d7753f6612528f35b49861f22e11f583.scope: Deactivated successfully.
Jan 22 09:30:37 np0005592157 podman[289688]: 2026-01-22 14:30:37.086237016 +0000 UTC m=+0.047963324 container create c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:30:37 np0005592157 systemd[1]: Started libpod-conmon-c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607.scope.
Jan 22 09:30:37 np0005592157 podman[289688]: 2026-01-22 14:30:37.061267395 +0000 UTC m=+0.022993723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:30:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:30:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df101e3dd06b448431d4f444195f1e66a0cb600dc3be33bd209a22a8bce47dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df101e3dd06b448431d4f444195f1e66a0cb600dc3be33bd209a22a8bce47dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df101e3dd06b448431d4f444195f1e66a0cb600dc3be33bd209a22a8bce47dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df101e3dd06b448431d4f444195f1e66a0cb600dc3be33bd209a22a8bce47dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:37 np0005592157 podman[289688]: 2026-01-22 14:30:37.188599561 +0000 UTC m=+0.150325849 container init c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:30:37 np0005592157 podman[289688]: 2026-01-22 14:30:37.200692441 +0000 UTC m=+0.162418719 container start c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:30:37 np0005592157 podman[289688]: 2026-01-22 14:30:37.204283681 +0000 UTC m=+0.166009959 container attach c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:30:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:37 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:37.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:37 np0005592157 adoring_moser[289704]: {
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:    "0": [
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:        {
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "devices": [
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "/dev/loop3"
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            ],
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "lv_name": "ceph_lv0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "lv_size": "7511998464",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "name": "ceph_lv0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "tags": {
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.cluster_name": "ceph",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.crush_device_class": "",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.encrypted": "0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.osd_id": "0",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.type": "block",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:                "ceph.vdo": "0"
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            },
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "type": "block",
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:            "vg_name": "ceph_vg0"
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:        }
Jan 22 09:30:37 np0005592157 adoring_moser[289704]:    ]
Jan 22 09:30:37 np0005592157 adoring_moser[289704]: }
Jan 22 09:30:37 np0005592157 systemd[1]: libpod-c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607.scope: Deactivated successfully.
Jan 22 09:30:37 np0005592157 podman[289688]: 2026-01-22 14:30:37.960524882 +0000 UTC m=+0.922251160 container died c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:30:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8df101e3dd06b448431d4f444195f1e66a0cb600dc3be33bd209a22a8bce47dd-merged.mount: Deactivated successfully.
Jan 22 09:30:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:38 np0005592157 podman[289688]: 2026-01-22 14:30:38.026056172 +0000 UTC m=+0.987782490 container remove c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:30:38 np0005592157 systemd[1]: libpod-conmon-c09d43970ed934a56e61931fed9e75bbd65dd988b899f0af039abedc7b9c6607.scope: Deactivated successfully.
Jan 22 09:30:38 np0005592157 nova_compute[245707]: 2026-01-22 14:30:38.272 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:38 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:38 np0005592157 podman[289870]: 2026-01-22 14:30:38.618530642 +0000 UTC m=+0.026217693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:30:38 np0005592157 podman[289870]: 2026-01-22 14:30:38.732413294 +0000 UTC m=+0.140100325 container create d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:30:38 np0005592157 systemd[1]: Started libpod-conmon-d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec.scope.
Jan 22 09:30:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:30:38 np0005592157 podman[289870]: 2026-01-22 14:30:38.898309298 +0000 UTC m=+0.305996379 container init d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:30:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:38 np0005592157 podman[289870]: 2026-01-22 14:30:38.905030945 +0000 UTC m=+0.312718016 container start d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:30:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:38.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:38 np0005592157 blissful_grothendieck[289932]: 167 167
Jan 22 09:30:38 np0005592157 systemd[1]: libpod-d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec.scope: Deactivated successfully.
Jan 22 09:30:38 np0005592157 podman[289870]: 2026-01-22 14:30:38.909385244 +0000 UTC m=+0.317072315 container attach d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:30:38 np0005592157 podman[289870]: 2026-01-22 14:30:38.910149153 +0000 UTC m=+0.317836224 container died d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:30:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-56d5bdfc9cd12735a68aa91e1afa0fa2c7b0ee5fcaeb82123f4150c3a51aae79-merged.mount: Deactivated successfully.
Jan 22 09:30:38 np0005592157 podman[289870]: 2026-01-22 14:30:38.962233577 +0000 UTC m=+0.369920618 container remove d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:30:38 np0005592157 systemd[1]: libpod-conmon-d9f8f9061b770e0678851faea7d9487df9bfdeb04c94e70700e2d16fc2ed32ec.scope: Deactivated successfully.
Jan 22 09:30:39 np0005592157 podman[289958]: 2026-01-22 14:30:39.121072977 +0000 UTC m=+0.036115899 container create 778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lichterman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:30:39 np0005592157 systemd[1]: Started libpod-conmon-778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30.scope.
Jan 22 09:30:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:30:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c09adfd3946bf1d1de98c5b3c7bcec645efc9107b3cb59189feda123b9131a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c09adfd3946bf1d1de98c5b3c7bcec645efc9107b3cb59189feda123b9131a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c09adfd3946bf1d1de98c5b3c7bcec645efc9107b3cb59189feda123b9131a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c09adfd3946bf1d1de98c5b3c7bcec645efc9107b3cb59189feda123b9131a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:30:39 np0005592157 podman[289958]: 2026-01-22 14:30:39.197668211 +0000 UTC m=+0.112711163 container init 778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:30:39 np0005592157 podman[289958]: 2026-01-22 14:30:39.105261503 +0000 UTC m=+0.020304445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:30:39 np0005592157 podman[289958]: 2026-01-22 14:30:39.210819728 +0000 UTC m=+0.125862650 container start 778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:30:39 np0005592157 podman[289958]: 2026-01-22 14:30:39.214497569 +0000 UTC m=+0.129540531 container attach 778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:30:39 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:39.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]: {
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]:        "osd_id": 0,
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]:        "type": "bluestore"
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]:    }
Jan 22 09:30:39 np0005592157 wizardly_lichterman[289974]: }
Jan 22 09:30:40 np0005592157 systemd[1]: libpod-778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30.scope: Deactivated successfully.
Jan 22 09:30:40 np0005592157 podman[289958]: 2026-01-22 14:30:40.016327174 +0000 UTC m=+0.931370106 container died 778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:30:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-20c09adfd3946bf1d1de98c5b3c7bcec645efc9107b3cb59189feda123b9131a-merged.mount: Deactivated successfully.
Jan 22 09:30:40 np0005592157 podman[289958]: 2026-01-22 14:30:40.094304282 +0000 UTC m=+1.009347214 container remove 778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:30:40 np0005592157 systemd[1]: libpod-conmon-778026ea2ea6849cf9ce503da230ebd5de7fedc55b52b31abd7b999118247c30.scope: Deactivated successfully.
Jan 22 09:30:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:30:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:30:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6b92fb20-0c0b-48b0-a359-413597567799 does not exist
Jan 22 09:30:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 08030d46-4599-4d9f-ad11-0b295c934421 does not exist
Jan 22 09:30:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 96989b31-ec84-423a-b431-d88e0df36913 does not exist
Jan 22 09:30:40 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:30:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:40.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:30:40 np0005592157 nova_compute[245707]: 2026-01-22 14:30:40.993 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:41 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:30:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:41.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:30:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:42 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:42 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:42.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:43 np0005592157 nova_compute[245707]: 2026-01-22 14:30:43.274 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:43 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:43.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:44 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:45.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:45 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:45 np0005592157 nova_compute[245707]: 2026-01-22 14:30:45.996 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:30:46 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:30:47
Jan 22 09:30:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 3238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Jan 22 09:30:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:30:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:47.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:30:47.604 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:30:47.604 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:30:47.605 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:47 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:47 np0005592157 ceph-mon[74359]: Health check update: 31 slow ops, oldest one blocked for 3238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:48 np0005592157 nova_compute[245707]: 2026-01-22 14:30:48.310 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:48 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:30:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:30:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:49.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:49 np0005592157 nova_compute[245707]: 2026-01-22 14:30:49.740 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:49 np0005592157 nova_compute[245707]: 2026-01-22 14:30:49.741 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:30:49 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:50 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:50 np0005592157 nova_compute[245707]: 2026-01-22 14:30:50.999 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:51.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:51 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 3243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:52 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:52.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:53 np0005592157 nova_compute[245707]: 2026-01-22 14:30:53.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:53 np0005592157 nova_compute[245707]: 2026-01-22 14:30:53.339 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:53 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:53 np0005592157 ceph-mon[74359]: Health check update: 31 slow ops, oldest one blocked for 3243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:54 np0005592157 nova_compute[245707]: 2026-01-22 14:30:54.245 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:54 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:55 np0005592157 nova_compute[245707]: 2026-01-22 14:30:55.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:55.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:55 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.001 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.517 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "72bf44b1-1787-47c0-b0e6-a90b0a2115ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.518 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "72bf44b1-1787-47c0-b0e6-a90b0a2115ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.539 245711 DEBUG nova.compute.manager [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.612 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.612 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.623 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.624 245711 INFO nova.compute.claims [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 09:30:56 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:56 np0005592157 nova_compute[245707]: 2026-01-22 14:30:56.883 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:56.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.243 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:30:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3697023630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.375 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.383 245711 DEBUG nova.compute.provider_tree [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Inventory has not changed in ProviderTree for provider: 25bab4de-b201-44ab-9630-4373ed73bbb5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.402 245711 DEBUG nova.scheduler.client.report [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Inventory has not changed for provider 25bab4de-b201-44ab-9630-4373ed73bbb5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.424 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.425 245711 DEBUG nova.compute.manager [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:30:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.483 245711 DEBUG nova.compute.manager [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.483 245711 DEBUG nova.network.neutron [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.504 245711 INFO nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.522 245711 DEBUG nova.compute.manager [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:30:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.632 245711 DEBUG nova.compute.manager [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.634 245711 DEBUG nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.634 245711 INFO nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Creating image(s)#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.679 245711 DEBUG nova.storage.rbd_utils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 72bf44b1-1787-47c0-b0e6-a90b0a2115ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.722 245711 DEBUG nova.storage.rbd_utils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 72bf44b1-1787-47c0-b0e6-a90b0a2115ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.760 245711 DEBUG nova.storage.rbd_utils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 72bf44b1-1787-47c0-b0e6-a90b0a2115ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.767 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.851 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.852 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.854 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.854 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:57 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:57 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.895 245711 DEBUG nova.storage.rbd_utils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 72bf44b1-1787-47c0-b0e6-a90b0a2115ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:30:57 np0005592157 nova_compute[245707]: 2026-01-22 14:30:57.901 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 72bf44b1-1787-47c0-b0e6-a90b0a2115ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.162 245711 DEBUG nova.policy [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '112b71a99add4ffeb28392e66d1a3d24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06252abc0be74ac08438db3d2f76db14', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.340 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.443 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 72bf44b1-1787-47c0-b0e6-a90b0a2115ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.555 245711 DEBUG nova.storage.rbd_utils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] resizing rbd image 72bf44b1-1787-47c0-b0e6-a90b0a2115ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.685 245711 DEBUG nova.objects.instance [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lazy-loading 'migration_context' on Instance uuid 72bf44b1-1787-47c0-b0e6-a90b0a2115ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.826 245711 DEBUG nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.826 245711 DEBUG nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Ensure instance console log exists: /var/lib/nova/instances/72bf44b1-1787-47c0-b0e6-a90b0a2115ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.827 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.827 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:58 np0005592157 nova_compute[245707]: 2026-01-22 14:30:58.827 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:58 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:58.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:58 np0005592157 podman[290283]: 2026-01-22 14:30:58.940973056 +0000 UTC m=+0.089786804 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:30:59 np0005592157 nova_compute[245707]: 2026-01-22 14:30:59.240 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:59 np0005592157 nova_compute[245707]: 2026-01-22 14:30:59.343 245711 DEBUG nova.network.neutron [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Successfully created port: 79914743-57df-481b-b27c-678613053f13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 09:30:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:30:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:59.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:59 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:31:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 527 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 291 KiB/s wr, 12 op/s
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.274 245711 DEBUG nova.network.neutron [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Successfully updated port: 79914743-57df-481b-b27c-678613053f13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.290 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "refresh_cache-72bf44b1-1787-47c0-b0e6-a90b0a2115ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.291 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquired lock "refresh_cache-72bf44b1-1787-47c0-b0e6-a90b0a2115ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.291 245711 DEBUG nova.network.neutron [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.395 245711 DEBUG nova.compute.manager [req-1d2fdac8-0fc9-4ab0-800f-47e9549b5ed5 req-c9cc3f0f-d7a8-4e81-80a9-5b6046fb7514 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Received event network-changed-79914743-57df-481b-b27c-678613053f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.396 245711 DEBUG nova.compute.manager [req-1d2fdac8-0fc9-4ab0-800f-47e9549b5ed5 req-c9cc3f0f-d7a8-4e81-80a9-5b6046fb7514 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Refreshing instance network info cache due to event network-changed-79914743-57df-481b-b27c-678613053f13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.396 245711 DEBUG oslo_concurrency.lockutils [req-1d2fdac8-0fc9-4ab0-800f-47e9549b5ed5 req-c9cc3f0f-d7a8-4e81-80a9-5b6046fb7514 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-72bf44b1-1787-47c0-b0e6-a90b0a2115ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:31:00 np0005592157 nova_compute[245707]: 2026-01-22 14:31:00.498 245711 DEBUG nova.network.neutron [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:31:00 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:31:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.003 245711 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.244 245711 DEBUG oslo_service.periodic_task [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.244 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.245 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.271 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 18becd7f-5901-49d8-87eb-548e630001aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 1089392f-9bda-4904-9370-95fc2c3dd7c2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: b8bec212-84ad-47fd-9608-2cc1999da6c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 7c13edec-8f8c-4e2d-8dcb-4976f52f7fdc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: df283133-db55-4a7e-a651-12dd25bae88e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.272 245711 DEBUG nova.compute.manager [None req-b35d8c40-6269-4bf0-a5b9-d0d4466f5597 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.587 245711 DEBUG nova.network.neutron [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Updating instance_info_cache with network_info: [{"id": "79914743-57df-481b-b27c-678613053f13", "address": "fa:16:3e:44:88:7f", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79914743-57", "ovs_interfaceid": "79914743-57df-481b-b27c-678613053f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.609 245711 DEBUG oslo_concurrency.lockutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Releasing lock "refresh_cache-72bf44b1-1787-47c0-b0e6-a90b0a2115ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.610 245711 DEBUG nova.compute.manager [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Instance network_info: |[{"id": "79914743-57df-481b-b27c-678613053f13", "address": "fa:16:3e:44:88:7f", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79914743-57", "ovs_interfaceid": "79914743-57df-481b-b27c-678613053f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.611 245711 DEBUG oslo_concurrency.lockutils [req-1d2fdac8-0fc9-4ab0-800f-47e9549b5ed5 req-c9cc3f0f-d7a8-4e81-80a9-5b6046fb7514 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-72bf44b1-1787-47c0-b0e6-a90b0a2115ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.612 245711 DEBUG nova.network.neutron [req-1d2fdac8-0fc9-4ab0-800f-47e9549b5ed5 req-c9cc3f0f-d7a8-4e81-80a9-5b6046fb7514 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Refreshing network info cache for port 79914743-57df-481b-b27c-678613053f13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:31:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:01.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.619 245711 DEBUG nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 72bf44b1-1787-47c0-b0e6-a90b0a2115ff] Start _get_guest_xml network_info=[{"id": "79914743-57df-481b-b27c-678613053f13", "address": "fa:16:3e:44:88:7f", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79914743-57", "ovs_interfaceid": "79914743-57df-481b-b27c-678613053f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_options': None, 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.625 245711 WARNING nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.630 245711 DEBUG nova.virt.libvirt.host [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.630 245711 DEBUG nova.virt.libvirt.host [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.641 245711 DEBUG nova.virt.libvirt.host [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.642 245711 DEBUG nova.virt.libvirt.host [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.644 245711 DEBUG nova.virt.libvirt.driver [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.645 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.645 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.646 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.646 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.646 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.647 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.647 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.648 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.648 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.648 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.649 245711 DEBUG nova.virt.hardware [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:31:01 np0005592157 nova_compute[245707]: 2026-01-22 14:31:01.654 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:31:01 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 579 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 28 op/s
Jan 22 09:31:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:31:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141403899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:31:02 np0005592157 nova_compute[245707]: 2026-01-22 14:31:02.114 245711 DEBUG oslo_concurrency.processutils [None req-0ba518c7-fe5b-4b26-b350-36c4ba9cd7d0 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:31:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 3248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:02 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:02 np0005592157 ceph-mon[74359]: Health check update: 31 slow ops, oldest one blocked for 3248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:03.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:03 np0005592157 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 09:31:03 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 579 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 28 op/s
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011509915026521496 of space, bias 1.0, pg target 3.4529745079564487 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:31:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:31:04 np0005592157 podman[290370]: 2026-01-22 14:31:04.383744715 +0000 UTC m=+0.107620247 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:31:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:31:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:31:04 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:05.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:06 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 09:31:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:06.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:07 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:07.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:08 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:08 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 09:31:08 np0005592157 ovsdb-server[47217]: ovs|00005|reconnect|ERR|tcp:127.0.0.1:41390: no response to inactivity probe after 5.05 seconds, disconnecting
Jan 22 09:31:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:08.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:09 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:09.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 09:31:10 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:10.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:11 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:11.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 3.0 MiB/s wr, 30 op/s
Jan 22 09:31:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:12 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:12.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:13.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:13 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 22 09:31:14 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:14.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:15.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:15 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 22 09:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:31:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:16 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:16.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:17.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:17 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:17 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:18 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:18.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:19.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:19 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:19 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:20 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:20.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:21.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:21 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:22 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:24 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:24 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:25 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:26 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:26.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:27 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:27.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:28 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:28.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:29 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:29 np0005592157 ovn_controller[146940]: 2026-01-22T14:31:29Z|00067|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 09:31:29 np0005592157 podman[290460]: 2026-01-22 14:31:29.333210845 +0000 UTC m=+0.067581711 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:31:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:29.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:30 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:30.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:31 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:31:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:31.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:31:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:32 np0005592157 ceph-mon[74359]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:31:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:33 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:33 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:33.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:34 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:35 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:35 np0005592157 podman[290482]: 2026-01-22 14:31:35.403071063 +0000 UTC m=+0.126195669 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:31:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:35.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:36 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:36.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.289033) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297289136, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 2208, "num_deletes": 251, "total_data_size": 3120865, "memory_usage": 3183696, "flush_reason": "Manual Compaction"}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297314690, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 3057164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50301, "largest_seqno": 52508, "table_properties": {"data_size": 3048175, "index_size": 5163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23266, "raw_average_key_size": 21, "raw_value_size": 3028198, "raw_average_value_size": 2778, "num_data_blocks": 222, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092130, "oldest_key_time": 1769092130, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 25720 microseconds, and 6731 cpu microseconds.
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.314765) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 3057164 bytes OK
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.314789) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.321528) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.321544) EVENT_LOG_v1 {"time_micros": 1769092297321539, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.321561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 3111590, prev total WAL file size 3111590, number of live WAL files 2.
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.322560) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2985KB)], [110(9846KB)]
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297322633, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 13140054, "oldest_snapshot_seqno": -1}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 10187 keys, 11563818 bytes, temperature: kUnknown
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297409407, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 11563818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11504481, "index_size": 32793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 273320, "raw_average_key_size": 26, "raw_value_size": 11327588, "raw_average_value_size": 1111, "num_data_blocks": 1246, "num_entries": 10187, "num_filter_entries": 10187, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.409732) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 11563818 bytes
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.412961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.2 rd, 133.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 10702, records dropped: 515 output_compression: NoCompression
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.412983) EVENT_LOG_v1 {"time_micros": 1769092297412973, "job": 66, "event": "compaction_finished", "compaction_time_micros": 86893, "compaction_time_cpu_micros": 37361, "output_level": 6, "num_output_files": 1, "total_output_size": 11563818, "num_input_records": 10702, "num_output_records": 10187, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297414077, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297416864, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.322402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.416994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.417000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.417004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.417007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:37.417010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:37.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:38 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:38 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:38.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:39 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:39.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:40 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:40.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 74ae52f3-104a-4c9f-a327-fe2d1ce7ba44 does not exist
Jan 22 09:31:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0b8c518b-8974-4969-af09-eb1b6f395e81 does not exist
Jan 22 09:31:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 624c36f9-5a9f-439c-9c88-594a4a8831b1 does not exist
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:31:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:31:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:42 np0005592157 podman[290833]: 2026-01-22 14:31:42.018585467 +0000 UTC m=+0.044033006 container create 4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:31:42 np0005592157 systemd[1]: Started libpod-conmon-4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c.scope.
Jan 22 09:31:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:31:42 np0005592157 podman[290833]: 2026-01-22 14:31:41.998858936 +0000 UTC m=+0.024306495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:31:42 np0005592157 podman[290833]: 2026-01-22 14:31:42.096580956 +0000 UTC m=+0.122028495 container init 4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:31:42 np0005592157 podman[290833]: 2026-01-22 14:31:42.104831331 +0000 UTC m=+0.130278870 container start 4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:31:42 np0005592157 laughing_chatelet[290850]: 167 167
Jan 22 09:31:42 np0005592157 systemd[1]: libpod-4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c.scope: Deactivated successfully.
Jan 22 09:31:42 np0005592157 podman[290833]: 2026-01-22 14:31:42.109805165 +0000 UTC m=+0.135252724 container attach 4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:31:42 np0005592157 podman[290833]: 2026-01-22 14:31:42.110521423 +0000 UTC m=+0.135968962 container died 4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:31:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bdca693e7a65cc22c8a090c16ab8a1f40f6aa3acf4614e3a0401bc44ab297056-merged.mount: Deactivated successfully.
Jan 22 09:31:42 np0005592157 podman[290833]: 2026-01-22 14:31:42.149124852 +0000 UTC m=+0.174572401 container remove 4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:31:42 np0005592157 systemd[1]: libpod-conmon-4d9c336fe13373fd02cbcff579f53e141cfd07cab0b1ba37aa7f34d4ab9a8d5c.scope: Deactivated successfully.
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:31:42 np0005592157 podman[290873]: 2026-01-22 14:31:42.372431224 +0000 UTC m=+0.071079498 container create 86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 09:31:42 np0005592157 systemd[1]: Started libpod-conmon-86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a.scope.
Jan 22 09:31:42 np0005592157 podman[290873]: 2026-01-22 14:31:42.342839649 +0000 UTC m=+0.041487973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:31:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:31:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e7b1c2d4a8b48c107a533e5576f91be908093b63a909d01283832f8e9839ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e7b1c2d4a8b48c107a533e5576f91be908093b63a909d01283832f8e9839ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e7b1c2d4a8b48c107a533e5576f91be908093b63a909d01283832f8e9839ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e7b1c2d4a8b48c107a533e5576f91be908093b63a909d01283832f8e9839ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e7b1c2d4a8b48c107a533e5576f91be908093b63a909d01283832f8e9839ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:42 np0005592157 podman[290873]: 2026-01-22 14:31:42.456087994 +0000 UTC m=+0.154736258 container init 86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gould, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.461402) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302461554, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 339, "num_deletes": 258, "total_data_size": 128335, "memory_usage": 136424, "flush_reason": "Manual Compaction"}
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302465429, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 127264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52509, "largest_seqno": 52847, "table_properties": {"data_size": 125158, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5510, "raw_average_key_size": 18, "raw_value_size": 120743, "raw_average_value_size": 397, "num_data_blocks": 12, "num_entries": 304, "num_filter_entries": 304, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092298, "oldest_key_time": 1769092298, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 3995 microseconds, and 940 cpu microseconds.
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:42 np0005592157 podman[290873]: 2026-01-22 14:31:42.466489563 +0000 UTC m=+0.165137797 container start 86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.465468) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 127264 bytes OK
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.465481) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.466869) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.466882) EVENT_LOG_v1 {"time_micros": 1769092302466878, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.466898) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 125956, prev total WAL file size 125956, number of live WAL files 2.
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.467264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323630' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(124KB)], [113(11MB)]
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302467338, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11691082, "oldest_snapshot_seqno": -1}
Jan 22 09:31:42 np0005592157 podman[290873]: 2026-01-22 14:31:42.4703748 +0000 UTC m=+0.169023084 container attach 86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gould, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 9964 keys, 11552074 bytes, temperature: kUnknown
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302535457, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 11552074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11493716, "index_size": 32326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 269691, "raw_average_key_size": 27, "raw_value_size": 11320156, "raw_average_value_size": 1136, "num_data_blocks": 1223, "num_entries": 9964, "num_filter_entries": 9964, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.535742) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11552074 bytes
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.537432) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.2 rd, 169.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(182.6) write-amplify(90.8) OK, records in: 10491, records dropped: 527 output_compression: NoCompression
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.537449) EVENT_LOG_v1 {"time_micros": 1769092302537441, "job": 68, "event": "compaction_finished", "compaction_time_micros": 68274, "compaction_time_cpu_micros": 27185, "output_level": 6, "num_output_files": 1, "total_output_size": 11552074, "num_input_records": 10491, "num_output_records": 9964, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302537806, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302540156, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.467180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.540209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.540211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.540213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.540214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:31:42.540216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:43.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:43 np0005592157 gifted_gould[290889]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:31:43 np0005592157 gifted_gould[290889]: --> relative data size: 1.0
Jan 22 09:31:43 np0005592157 gifted_gould[290889]: --> All data devices are unavailable
Jan 22 09:31:43 np0005592157 systemd[1]: libpod-86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a.scope: Deactivated successfully.
Jan 22 09:31:43 np0005592157 podman[290873]: 2026-01-22 14:31:43.231877392 +0000 UTC m=+0.930525626 container died 86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gould, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:31:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-81e7b1c2d4a8b48c107a533e5576f91be908093b63a909d01283832f8e9839ad-merged.mount: Deactivated successfully.
Jan 22 09:31:43 np0005592157 podman[290873]: 2026-01-22 14:31:43.288722076 +0000 UTC m=+0.987370310 container remove 86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:31:43 np0005592157 systemd[1]: libpod-conmon-86777bfa2ddd9a590e432ea65c6620ce608c447c2bfc2600b1bf70648495382a.scope: Deactivated successfully.
Jan 22 09:31:43 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:43 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:43.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:43 np0005592157 podman[291059]: 2026-01-22 14:31:43.878402926 +0000 UTC m=+0.048762103 container create 26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 22 09:31:43 np0005592157 systemd[1]: Started libpod-conmon-26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1.scope.
Jan 22 09:31:43 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:31:43 np0005592157 podman[291059]: 2026-01-22 14:31:43.936769157 +0000 UTC m=+0.107128315 container init 26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:31:43 np0005592157 podman[291059]: 2026-01-22 14:31:43.942585812 +0000 UTC m=+0.112944949 container start 26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:31:43 np0005592157 cool_torvalds[291075]: 167 167
Jan 22 09:31:43 np0005592157 systemd[1]: libpod-26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1.scope: Deactivated successfully.
Jan 22 09:31:43 np0005592157 podman[291059]: 2026-01-22 14:31:43.946543511 +0000 UTC m=+0.116902668 container attach 26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:31:43 np0005592157 podman[291059]: 2026-01-22 14:31:43.947406252 +0000 UTC m=+0.117765389 container died 26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:31:43 np0005592157 podman[291059]: 2026-01-22 14:31:43.862798018 +0000 UTC m=+0.033157175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:31:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-03e35e519d84281994aae25544ba8349ed5488b6f41334ecebb060706c52af35-merged.mount: Deactivated successfully.
Jan 22 09:31:43 np0005592157 podman[291059]: 2026-01-22 14:31:43.998946493 +0000 UTC m=+0.169305630 container remove 26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 22 09:31:44 np0005592157 systemd[1]: libpod-conmon-26ca01101efa75e261242c7922cc6dc14be39bbd1091d47c96e820d584ac96e1.scope: Deactivated successfully.
Jan 22 09:31:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:44 np0005592157 podman[291099]: 2026-01-22 14:31:44.189252614 +0000 UTC m=+0.067808876 container create 04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:31:44 np0005592157 systemd[1]: Started libpod-conmon-04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571.scope.
Jan 22 09:31:44 np0005592157 podman[291099]: 2026-01-22 14:31:44.159070323 +0000 UTC m=+0.037626665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:31:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:31:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6827891f22229ce2350d23dfcb86f34a7426fcb5cfd4b43ee0355e6073ee9d15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6827891f22229ce2350d23dfcb86f34a7426fcb5cfd4b43ee0355e6073ee9d15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6827891f22229ce2350d23dfcb86f34a7426fcb5cfd4b43ee0355e6073ee9d15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6827891f22229ce2350d23dfcb86f34a7426fcb5cfd4b43ee0355e6073ee9d15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:44 np0005592157 podman[291099]: 2026-01-22 14:31:44.278077642 +0000 UTC m=+0.156633944 container init 04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ride, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 09:31:44 np0005592157 podman[291099]: 2026-01-22 14:31:44.294339867 +0000 UTC m=+0.172896119 container start 04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:31:44 np0005592157 podman[291099]: 2026-01-22 14:31:44.298545121 +0000 UTC m=+0.177101423 container attach 04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ride, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:31:44 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:45.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:45 np0005592157 competent_ride[291116]: {
Jan 22 09:31:45 np0005592157 competent_ride[291116]:    "0": [
Jan 22 09:31:45 np0005592157 competent_ride[291116]:        {
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "devices": [
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "/dev/loop3"
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            ],
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "lv_name": "ceph_lv0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "lv_size": "7511998464",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "name": "ceph_lv0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "tags": {
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.cluster_name": "ceph",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.crush_device_class": "",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.encrypted": "0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.osd_id": "0",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.type": "block",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:                "ceph.vdo": "0"
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            },
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "type": "block",
Jan 22 09:31:45 np0005592157 competent_ride[291116]:            "vg_name": "ceph_vg0"
Jan 22 09:31:45 np0005592157 competent_ride[291116]:        }
Jan 22 09:31:45 np0005592157 competent_ride[291116]:    ]
Jan 22 09:31:45 np0005592157 competent_ride[291116]: }
Jan 22 09:31:45 np0005592157 systemd[1]: libpod-04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571.scope: Deactivated successfully.
Jan 22 09:31:45 np0005592157 podman[291099]: 2026-01-22 14:31:45.113738369 +0000 UTC m=+0.992294621 container died 04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:31:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6827891f22229ce2350d23dfcb86f34a7426fcb5cfd4b43ee0355e6073ee9d15-merged.mount: Deactivated successfully.
Jan 22 09:31:45 np0005592157 podman[291099]: 2026-01-22 14:31:45.170632813 +0000 UTC m=+1.049189065 container remove 04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:31:45 np0005592157 systemd[1]: libpod-conmon-04065c7647400c1ecc8466a7c35e2548d0fc7cbbf9753a8f8e86b734953ee571.scope: Deactivated successfully.
Jan 22 09:31:45 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:45.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:45 np0005592157 podman[291280]: 2026-01-22 14:31:45.774339493 +0000 UTC m=+0.041017041 container create 06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:31:45 np0005592157 systemd[1]: Started libpod-conmon-06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf.scope.
Jan 22 09:31:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:31:45 np0005592157 podman[291280]: 2026-01-22 14:31:45.759176176 +0000 UTC m=+0.025853744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:31:45 np0005592157 podman[291280]: 2026-01-22 14:31:45.859386597 +0000 UTC m=+0.126064165 container init 06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:31:45 np0005592157 podman[291280]: 2026-01-22 14:31:45.864793282 +0000 UTC m=+0.131470840 container start 06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:31:45 np0005592157 podman[291280]: 2026-01-22 14:31:45.867677643 +0000 UTC m=+0.134355191 container attach 06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:31:45 np0005592157 dreamy_lovelace[291297]: 167 167
Jan 22 09:31:45 np0005592157 systemd[1]: libpod-06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf.scope: Deactivated successfully.
Jan 22 09:31:45 np0005592157 podman[291280]: 2026-01-22 14:31:45.870094173 +0000 UTC m=+0.136771751 container died 06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:31:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fe0267c96d92456c026797c845b048ab40829b2738b3444f1db6b30dc65b7adb-merged.mount: Deactivated successfully.
Jan 22 09:31:45 np0005592157 podman[291280]: 2026-01-22 14:31:45.90617266 +0000 UTC m=+0.172850208 container remove 06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:31:45 np0005592157 systemd[1]: libpod-conmon-06451e7ff0da1923e8aae655df2b91c5df944af90de5743cb1d46801b31208cf.scope: Deactivated successfully.
Jan 22 09:31:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:46 np0005592157 podman[291320]: 2026-01-22 14:31:46.066494046 +0000 UTC m=+0.044072556 container create 283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:31:46 np0005592157 systemd[1]: Started libpod-conmon-283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1.scope.
Jan 22 09:31:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:31:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b81210f57c3a61cce8110ae69db57d7e82ecca233d984b380c768228b225de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b81210f57c3a61cce8110ae69db57d7e82ecca233d984b380c768228b225de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b81210f57c3a61cce8110ae69db57d7e82ecca233d984b380c768228b225de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b81210f57c3a61cce8110ae69db57d7e82ecca233d984b380c768228b225de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:31:46 np0005592157 podman[291320]: 2026-01-22 14:31:46.045754371 +0000 UTC m=+0.023332901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:31:46 np0005592157 podman[291320]: 2026-01-22 14:31:46.153982811 +0000 UTC m=+0.131561321 container init 283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:31:46 np0005592157 podman[291320]: 2026-01-22 14:31:46.160472943 +0000 UTC m=+0.138051493 container start 283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:31:46 np0005592157 podman[291320]: 2026-01-22 14:31:46.164747599 +0000 UTC m=+0.142326109 container attach 283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:31:46 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:31:46 np0005592157 modest_hoover[291336]: {
Jan 22 09:31:46 np0005592157 modest_hoover[291336]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:31:46 np0005592157 modest_hoover[291336]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:31:46 np0005592157 modest_hoover[291336]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:31:46 np0005592157 modest_hoover[291336]:        "osd_id": 0,
Jan 22 09:31:46 np0005592157 modest_hoover[291336]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:31:46 np0005592157 modest_hoover[291336]:        "type": "bluestore"
Jan 22 09:31:46 np0005592157 modest_hoover[291336]:    }
Jan 22 09:31:46 np0005592157 modest_hoover[291336]: }
Jan 22 09:31:47 np0005592157 systemd[1]: libpod-283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1.scope: Deactivated successfully.
Jan 22 09:31:47 np0005592157 podman[291320]: 2026-01-22 14:31:47.001173665 +0000 UTC m=+0.978752165 container died 283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:31:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4b81210f57c3a61cce8110ae69db57d7e82ecca233d984b380c768228b225de5-merged.mount: Deactivated successfully.
Jan 22 09:31:47 np0005592157 podman[291320]: 2026-01-22 14:31:47.056347376 +0000 UTC m=+1.033925876 container remove 283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:31:47 np0005592157 systemd[1]: libpod-conmon-283bd1932d21e05032373501fb4e5107f11926e36eabc26de8620890265a67a1.scope: Deactivated successfully.
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ce6a6ad6-cbe4-49ee-b386-adc73d1a05d7 does not exist
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a72ff97f-a289-4306-abd8-e3252c21f501 does not exist
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5aa016a3-63bd-4a3c-aa33-a88f33ecf14b does not exist
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:31:47
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'images', 'default.rgw.meta']
Jan 22 09:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:31:47.605 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:31:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:31:47.606 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:31:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:31:47.606 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:31:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:47.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:48 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:48 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:49 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:49.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:31:50 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:51 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:51.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 09:31:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:52 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:53 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:53 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:53.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 09:31:54 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:55 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:55.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 09:31:56 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:57.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 11K writes, 52K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1938 writes, 9118 keys, 1938 commit groups, 1.0 writes per commit group, ingest: 10.99 MB, 0.02 MB/s#012Interval WAL: 1938 writes, 1938 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     67.0      0.92              0.27        34    0.027       0      0       0.0       0.0#012  L6      1/0   11.02 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8    110.5     94.3      3.14              1.13        33    0.095    250K    18K       0.0       0.0#012 Sum      1/0   11.02 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.8     85.4     88.1      4.06              1.39        67    0.061    250K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     77.7     79.1      1.16              0.28        16    0.072     79K   4122       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    110.5     94.3      3.14              1.13        33    0.095    250K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     67.2      0.92              0.27        33    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.061, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.35 GB write, 0.10 MB/s write, 0.34 GB read, 0.10 MB/s read, 4.1 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 38.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000264 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2056,36.76 MB,12.0912%) FilterBlock(68,703.98 KB,0.226146%) IndexBlock(68,983.98 KB,0.316093%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:31:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:57 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:57.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 09:31:58 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:59 np0005592157 podman[291451]: 2026-01-22 14:31:59.535663657 +0000 UTC m=+0.089402594 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 09:31:59 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:31:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:59.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 09:32:00 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:01 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:01.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 09:32:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:02 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:02 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:03 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:03.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012152414270426204 of space, bias 1.0, pg target 3.6457242811278614 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:32:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:32:04 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:05.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:05 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:05.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:06 np0005592157 podman[291500]: 2026-01-22 14:32:06.35099507 +0000 UTC m=+0.080723368 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 09:32:06 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:07.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:07 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:07 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:08 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:09.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:09.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:09 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:10 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:11.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:11 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:12 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:13.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:32:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:13.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:32:13 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:13 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:14 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:15.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:15.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:15 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:32:16 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:17.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:17.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:17 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:18 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:19.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:19 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:20 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:21.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:21.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:22 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:23 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:23 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:23.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:24 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:25 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:25.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:26 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:27.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:27 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:27.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:28 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:28 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:29.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:29 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:29.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:32:30 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:30 np0005592157 podman[291588]: 2026-01-22 14:32:30.882198496 +0000 UTC m=+0.615178344 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 09:32:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:31 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:31.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 633 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 22 09:32:32 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:33 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:33 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:33.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 633 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 22 09:32:34 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:35.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:35 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:35.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 09:32:36 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:37.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:37 np0005592157 podman[291611]: 2026-01-22 14:32:37.382489417 +0000 UTC m=+0.101850863 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:32:37 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:37.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 09:32:38 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:38 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:39.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:39 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:39.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 09:32:40 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:41.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:41 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 09:32:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:42 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:43.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:43.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:43 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:43 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 3352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 76 op/s
Jan 22 09:32:44 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:45.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:45.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:45 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 09:32:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 95 op/s
Jan 22 09:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:32:46 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:47.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:47 np0005592157 ceph-mgr[74655]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:32:47
Jan 22 09:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 22 09:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:32:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:32:47.607 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:32:47.607 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:32:47.607 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:47.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:47 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 254a5576-a70b-4ce0-ba77-c1d588802a36 does not exist
Jan 22 09:32:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev aad7038f-fc7b-48fe-abb4-ce8479a7780b does not exist
Jan 22 09:32:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9a016e5b-b0cb-49d9-8fd7-ee297bf4e723 does not exist
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:32:48 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:49.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:49 np0005592157 podman[291967]: 2026-01-22 14:32:49.152645466 +0000 UTC m=+0.049546313 container create 95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:32:49 np0005592157 systemd[1]: Started libpod-conmon-95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c.scope.
Jan 22 09:32:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:32:49 np0005592157 podman[291967]: 2026-01-22 14:32:49.130060074 +0000 UTC m=+0.026960991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:32:49 np0005592157 podman[291967]: 2026-01-22 14:32:49.233870805 +0000 UTC m=+0.130771652 container init 95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 09:32:49 np0005592157 podman[291967]: 2026-01-22 14:32:49.242306925 +0000 UTC m=+0.139207772 container start 95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williamson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:32:49 np0005592157 podman[291967]: 2026-01-22 14:32:49.246349926 +0000 UTC m=+0.143250773 container attach 95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williamson, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:32:49 np0005592157 flamboyant_williamson[291983]: 167 167
Jan 22 09:32:49 np0005592157 systemd[1]: libpod-95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c.scope: Deactivated successfully.
Jan 22 09:32:49 np0005592157 conmon[291983]: conmon 95edfec0b1bd57c8e55c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c.scope/container/memory.events
Jan 22 09:32:49 np0005592157 podman[291967]: 2026-01-22 14:32:49.250755715 +0000 UTC m=+0.147656562 container died 95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:32:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-92c7d6765ee06fa16813d90275f4d4967a09873d7c7e58ecce904a7ccf4f173d-merged.mount: Deactivated successfully.
Jan 22 09:32:49 np0005592157 podman[291967]: 2026-01-22 14:32:49.301751873 +0000 UTC m=+0.198652760 container remove 95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:32:49 np0005592157 systemd[1]: libpod-conmon-95edfec0b1bd57c8e55ca0d11ef1fde02faef92c8171083e14970a45c29f8c8c.scope: Deactivated successfully.
Jan 22 09:32:49 np0005592157 podman[292005]: 2026-01-22 14:32:49.484180719 +0000 UTC m=+0.054813514 container create b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:32:49 np0005592157 systemd[1]: Started libpod-conmon-b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab.scope.
Jan 22 09:32:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:32:49 np0005592157 podman[292005]: 2026-01-22 14:32:49.464883949 +0000 UTC m=+0.035516764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc222b0aa8046dc2145a3642bc1ef614c570a5d3dd60573cd55185126410d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc222b0aa8046dc2145a3642bc1ef614c570a5d3dd60573cd55185126410d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc222b0aa8046dc2145a3642bc1ef614c570a5d3dd60573cd55185126410d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc222b0aa8046dc2145a3642bc1ef614c570a5d3dd60573cd55185126410d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc222b0aa8046dc2145a3642bc1ef614c570a5d3dd60573cd55185126410d81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:49 np0005592157 podman[292005]: 2026-01-22 14:32:49.574445523 +0000 UTC m=+0.145078338 container init b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_booth, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:32:49 np0005592157 podman[292005]: 2026-01-22 14:32:49.582146684 +0000 UTC m=+0.152779479 container start b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:32:49 np0005592157 podman[292005]: 2026-01-22 14:32:49.586161304 +0000 UTC m=+0.156794159 container attach b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_booth, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:32:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:49.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:49 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 675 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Jan 22 09:32:50 np0005592157 competent_booth[292022]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:32:50 np0005592157 competent_booth[292022]: --> relative data size: 1.0
Jan 22 09:32:50 np0005592157 competent_booth[292022]: --> All data devices are unavailable
Jan 22 09:32:50 np0005592157 systemd[1]: libpod-b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab.scope: Deactivated successfully.
Jan 22 09:32:50 np0005592157 podman[292005]: 2026-01-22 14:32:50.474417068 +0000 UTC m=+1.045049873 container died b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:32:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1cc222b0aa8046dc2145a3642bc1ef614c570a5d3dd60573cd55185126410d81-merged.mount: Deactivated successfully.
Jan 22 09:32:50 np0005592157 podman[292005]: 2026-01-22 14:32:50.551918445 +0000 UTC m=+1.122551240 container remove b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_booth, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:32:50 np0005592157 systemd[1]: libpod-conmon-b9e0217cabc39d22a7b3ed902d005389e5ffced2fb66631bbadf0774fc87f9ab.scope: Deactivated successfully.
Jan 22 09:32:50 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:51.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:51 np0005592157 podman[292190]: 2026-01-22 14:32:51.1232652 +0000 UTC m=+0.043170474 container create 5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:32:51 np0005592157 systemd[1]: Started libpod-conmon-5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98.scope.
Jan 22 09:32:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:32:51 np0005592157 podman[292190]: 2026-01-22 14:32:51.104206216 +0000 UTC m=+0.024111470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:32:51 np0005592157 podman[292190]: 2026-01-22 14:32:51.214473098 +0000 UTC m=+0.134378352 container init 5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wiles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:32:51 np0005592157 podman[292190]: 2026-01-22 14:32:51.221376499 +0000 UTC m=+0.141281733 container start 5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wiles, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:32:51 np0005592157 podman[292190]: 2026-01-22 14:32:51.225117452 +0000 UTC m=+0.145022686 container attach 5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wiles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:32:51 np0005592157 determined_wiles[292206]: 167 167
Jan 22 09:32:51 np0005592157 systemd[1]: libpod-5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98.scope: Deactivated successfully.
Jan 22 09:32:51 np0005592157 podman[292190]: 2026-01-22 14:32:51.226664501 +0000 UTC m=+0.146569735 container died 5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:32:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-02d4142536101b62cbe2c0afc88a058a87e9f48d40045f9883968df9feb39c3c-merged.mount: Deactivated successfully.
Jan 22 09:32:51 np0005592157 podman[292190]: 2026-01-22 14:32:51.269174858 +0000 UTC m=+0.189080092 container remove 5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:32:51 np0005592157 systemd[1]: libpod-conmon-5cab34d7d21dcf971c4a10790e44138473498d26d89c9a23422081b6fa081e98.scope: Deactivated successfully.
Jan 22 09:32:51 np0005592157 podman[292231]: 2026-01-22 14:32:51.43577914 +0000 UTC m=+0.044115578 container create 314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:32:51 np0005592157 systemd[1]: Started libpod-conmon-314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362.scope.
Jan 22 09:32:51 np0005592157 podman[292231]: 2026-01-22 14:32:51.417463314 +0000 UTC m=+0.025799752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:32:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:32:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b196874950c2eb9a4a8ba1ec0e7bd35e66cd6702d72e8544be84365c3cc5d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b196874950c2eb9a4a8ba1ec0e7bd35e66cd6702d72e8544be84365c3cc5d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b196874950c2eb9a4a8ba1ec0e7bd35e66cd6702d72e8544be84365c3cc5d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b196874950c2eb9a4a8ba1ec0e7bd35e66cd6702d72e8544be84365c3cc5d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:51 np0005592157 podman[292231]: 2026-01-22 14:32:51.53632861 +0000 UTC m=+0.144665118 container init 314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 09:32:51 np0005592157 podman[292231]: 2026-01-22 14:32:51.551553088 +0000 UTC m=+0.159889546 container start 314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:32:51 np0005592157 podman[292231]: 2026-01-22 14:32:51.555433255 +0000 UTC m=+0.163769703 container attach 314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:32:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:51.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:51 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]: {
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:    "0": [
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:        {
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "devices": [
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "/dev/loop3"
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            ],
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "lv_name": "ceph_lv0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "lv_size": "7511998464",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "name": "ceph_lv0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "tags": {
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.cluster_name": "ceph",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.crush_device_class": "",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.encrypted": "0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.osd_id": "0",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.type": "block",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:                "ceph.vdo": "0"
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            },
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "type": "block",
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:            "vg_name": "ceph_vg0"
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:        }
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]:    ]
Jan 22 09:32:52 np0005592157 eloquent_galileo[292248]: }
Jan 22 09:32:52 np0005592157 systemd[1]: libpod-314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362.scope: Deactivated successfully.
Jan 22 09:32:52 np0005592157 podman[292231]: 2026-01-22 14:32:52.356843439 +0000 UTC m=+0.965179877 container died 314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:32:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-73b196874950c2eb9a4a8ba1ec0e7bd35e66cd6702d72e8544be84365c3cc5d6-merged.mount: Deactivated successfully.
Jan 22 09:32:52 np0005592157 podman[292231]: 2026-01-22 14:32:52.415248741 +0000 UTC m=+1.023585169 container remove 314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:32:52 np0005592157 systemd[1]: libpod-conmon-314c4f9826cc1469149c3617e45b97bd5e005e4a2986128b7f797b07415a7362.scope: Deactivated successfully.
Jan 22 09:32:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 3357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:52 np0005592157 podman[292412]: 2026-01-22 14:32:52.954262882 +0000 UTC m=+0.034320004 container create ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:32:52 np0005592157 systemd[1]: Started libpod-conmon-ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d.scope.
Jan 22 09:32:52 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 3357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:52 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:32:53 np0005592157 podman[292412]: 2026-01-22 14:32:53.024248052 +0000 UTC m=+0.104305194 container init ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:32:53 np0005592157 podman[292412]: 2026-01-22 14:32:53.029752229 +0000 UTC m=+0.109809351 container start ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:32:53 np0005592157 brave_galileo[292428]: 167 167
Jan 22 09:32:53 np0005592157 systemd[1]: libpod-ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d.scope: Deactivated successfully.
Jan 22 09:32:53 np0005592157 podman[292412]: 2026-01-22 14:32:53.033662416 +0000 UTC m=+0.113719538 container attach ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:32:53 np0005592157 podman[292412]: 2026-01-22 14:32:53.034598429 +0000 UTC m=+0.114655551 container died ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:32:53 np0005592157 podman[292412]: 2026-01-22 14:32:52.939642738 +0000 UTC m=+0.019699880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:32:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-278208a7a1d2fc475a9611511432ede72132e17818e88f61bc131e340a3e5fe3-merged.mount: Deactivated successfully.
Jan 22 09:32:53 np0005592157 podman[292412]: 2026-01-22 14:32:53.069081647 +0000 UTC m=+0.149138769 container remove ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:32:53 np0005592157 systemd[1]: libpod-conmon-ff5e05624d8021ee56a5993c266457d96c9e2ea2d9e4d05d02b7d060dc3ba49d.scope: Deactivated successfully.
Jan 22 09:32:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:53 np0005592157 podman[292451]: 2026-01-22 14:32:53.206655447 +0000 UTC m=+0.035639317 container create 4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jang, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:32:53 np0005592157 systemd[1]: Started libpod-conmon-4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842.scope.
Jan 22 09:32:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:32:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4332d17c64da479302447c94ca468ed09b1e1609f2b131b8290519acb486ea57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4332d17c64da479302447c94ca468ed09b1e1609f2b131b8290519acb486ea57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4332d17c64da479302447c94ca468ed09b1e1609f2b131b8290519acb486ea57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4332d17c64da479302447c94ca468ed09b1e1609f2b131b8290519acb486ea57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:32:53 np0005592157 podman[292451]: 2026-01-22 14:32:53.273208812 +0000 UTC m=+0.102192692 container init 4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:32:53 np0005592157 podman[292451]: 2026-01-22 14:32:53.281657302 +0000 UTC m=+0.110641172 container start 4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:32:53 np0005592157 podman[292451]: 2026-01-22 14:32:53.284434701 +0000 UTC m=+0.113418571 container attach 4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:32:53 np0005592157 podman[292451]: 2026-01-22 14:32:53.191129741 +0000 UTC m=+0.020113641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:32:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:53.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]: {
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]:        "osd_id": 0,
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]:        "type": "bluestore"
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]:    }
Jan 22 09:32:54 np0005592157 pedantic_jang[292467]: }
Jan 22 09:32:54 np0005592157 systemd[1]: libpod-4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842.scope: Deactivated successfully.
Jan 22 09:32:54 np0005592157 podman[292451]: 2026-01-22 14:32:54.063631383 +0000 UTC m=+0.892615253 container died 4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jang, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:32:54 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4332d17c64da479302447c94ca468ed09b1e1609f2b131b8290519acb486ea57-merged.mount: Deactivated successfully.
Jan 22 09:32:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 09:32:54 np0005592157 podman[292451]: 2026-01-22 14:32:54.110114299 +0000 UTC m=+0.939098169 container remove 4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:32:54 np0005592157 systemd[1]: libpod-conmon-4bb820ff892b4a7d246bcb5c211254c7d65aa4b4000985b1a8fdc2db91b62842.scope: Deactivated successfully.
Jan 22 09:32:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:32:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:32:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3c77924d-8daf-4c19-a3d5-e8ba6c1e0734 does not exist
Jan 22 09:32:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ed64a8d7-95e9-4ee7-8b61-3d11708c6ac8 does not exist
Jan 22 09:32:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 693edead-5536-4fa9-b97b-79d6a4d243d0 does not exist
Jan 22 09:32:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:55.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:55 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:55.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 09:32:56 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:57.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.480886) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377480978, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1149, "num_deletes": 252, "total_data_size": 1434775, "memory_usage": 1462424, "flush_reason": "Manual Compaction"}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377492069, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 920251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52848, "largest_seqno": 53996, "table_properties": {"data_size": 916035, "index_size": 1612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13120, "raw_average_key_size": 21, "raw_value_size": 906101, "raw_average_value_size": 1487, "num_data_blocks": 70, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092303, "oldest_key_time": 1769092303, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 11222 microseconds, and 6457 cpu microseconds.
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492113) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 920251 bytes OK
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492135) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494133) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494147) EVENT_LOG_v1 {"time_micros": 1769092377494143, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494165) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1429444, prev total WAL file size 1429444, number of live WAL files 2.
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494906) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373538' seq:0, type:0; will stop at (end)
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(898KB)], [116(11MB)]
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377494954, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12472325, "oldest_snapshot_seqno": -1}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 10087 keys, 9037650 bytes, temperature: kUnknown
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377552201, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 9037650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8982625, "index_size": 28673, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 272923, "raw_average_key_size": 27, "raw_value_size": 8811048, "raw_average_value_size": 873, "num_data_blocks": 1070, "num_entries": 10087, "num_filter_entries": 10087, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.552466) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 9037650 bytes
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.553787) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.6 rd, 157.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.4) write-amplify(9.8) OK, records in: 10573, records dropped: 486 output_compression: NoCompression
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.553802) EVENT_LOG_v1 {"time_micros": 1769092377553795, "job": 70, "event": "compaction_finished", "compaction_time_micros": 57327, "compaction_time_cpu_micros": 22532, "output_level": 6, "num_output_files": 1, "total_output_size": 9037650, "num_input_records": 10573, "num_output_records": 10087, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377554108, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377556414, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.556487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.556493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.556495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.556496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:32:57.556497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 574 KiB/s wr, 35 op/s
Jan 22 09:32:58 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:58 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:59 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:32:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:59.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 574 KiB/s wr, 35 op/s
Jan 22 09:33:00 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:01.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:01 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:01 np0005592157 podman[292607]: 2026-01-22 14:33:01.363109922 +0000 UTC m=+0.077826496 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 09:33:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:01.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 56 KiB/s wr, 7 op/s
Jan 22 09:33:02 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:03.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:03 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:03 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:03.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:04 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014312193199795273 of space, bias 1.0, pg target 4.293657959938582 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010759817606551275 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:33:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:33:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:05.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:05 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 627 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 682 B/s wr, 9 op/s
Jan 22 09:33:06 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:07.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:07 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:07.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 627 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 682 B/s wr, 9 op/s
Jan 22 09:33:08 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:08 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:08 np0005592157 podman[292631]: 2026-01-22 14:33:08.408049472 +0000 UTC m=+0.144328879 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 09:33:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:09.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:09 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:33:09.437 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:33:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:33:09.438 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:33:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:09.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 09:33:10 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:11.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:11 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:11.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 09:33:12 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:13.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:13 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:13 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:13.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 09:33:14 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:33:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:33:15 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:33:15.442 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:33:15 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 09:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:33:16 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:17.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 3388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:17 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:17 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 3388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Jan 22 09:33:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:33:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972462447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:33:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:33:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972462447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:33:18 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:19.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:19 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Jan 22 09:33:20 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:21.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:21 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:22 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:23.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:23 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:23 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:24 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:25.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:25 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:26 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:33:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:27.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:33:27 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:28 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:29.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:29.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:29 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:30 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:31.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:31 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:32 np0005592157 podman[292719]: 2026-01-22 14:33:32.31073021 +0000 UTC m=+0.049337998 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:33:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:32 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:32 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:33:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:33.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:33:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:33.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:34 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:35 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:35.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:36 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:37 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:37.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:37.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:38 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:38 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:39 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:39 np0005592157 podman[292742]: 2026-01-22 14:33:39.363001647 +0000 UTC m=+0.095701860 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 22 09:33:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:39.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:40 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:41 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:42 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:43 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:43 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:43.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:44 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:45 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:45.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:46 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:33:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:47.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:47 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:33:47
Jan 22 09:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta']
Jan 22 09:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:33:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:33:47.608 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:33:47.609 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:33:47.609 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:47.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:48 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:48 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:49.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:49 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:49.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:50 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:51.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:51 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:51.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:33:52 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:53.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:53 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:53 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:53.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 340 B/s rd, 0 op/s
Jan 22 09:33:54 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:55.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:55 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:55 np0005592157 podman[293000]: 2026-01-22 14:33:55.678722657 +0000 UTC m=+0.095185148 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:33:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:55 np0005592157 podman[293000]: 2026-01-22 14:33:55.846907968 +0000 UTC m=+0.263370379 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:33:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:33:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:33:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 4 op/s
Jan 22 09:33:56 np0005592157 podman[293156]: 2026-01-22 14:33:56.740558265 +0000 UTC m=+0.066490804 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:33:56 np0005592157 podman[293156]: 2026-01-22 14:33:56.754210365 +0000 UTC m=+0.080142894 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:33:56 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592157 podman[293222]: 2026-01-22 14:33:57.075426231 +0000 UTC m=+0.083537258 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 22 09:33:57 np0005592157 podman[293222]: 2026-01-22 14:33:57.084391194 +0000 UTC m=+0.092502161 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:57.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:57.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 4 op/s
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 50e0b159-efef-40d2-acbb-240397463f74 does not exist
Jan 22 09:33:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 331d5c77-d183-4c3c-b9fa-0960b1897624 does not exist
Jan 22 09:33:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3e61d70b-b135-47e8-9392-dc3103de56a7 does not exist
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 22 09:33:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 22 09:33:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:59.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:59 np0005592157 podman[293650]: 2026-01-22 14:33:59.52609613 +0000 UTC m=+0.056315361 container create a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:33:59 np0005592157 systemd[1]: Started libpod-conmon-a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f.scope.
Jan 22 09:33:59 np0005592157 podman[293650]: 2026-01-22 14:33:59.494194237 +0000 UTC m=+0.024413478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:33:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:33:59 np0005592157 podman[293650]: 2026-01-22 14:33:59.630227589 +0000 UTC m=+0.160446800 container init a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:33:59 np0005592157 podman[293650]: 2026-01-22 14:33:59.642230348 +0000 UTC m=+0.172449579 container start a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:33:59 np0005592157 podman[293650]: 2026-01-22 14:33:59.646774401 +0000 UTC m=+0.176993642 container attach a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:33:59 np0005592157 pensive_nobel[293667]: 167 167
Jan 22 09:33:59 np0005592157 systemd[1]: libpod-a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f.scope: Deactivated successfully.
Jan 22 09:33:59 np0005592157 podman[293650]: 2026-01-22 14:33:59.652222236 +0000 UTC m=+0.182441437 container died a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:33:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-12fe46a743d95cd85a8bc097e22dabe0addc273be56200fda3d98e7dfcdfb1f9-merged.mount: Deactivated successfully.
Jan 22 09:33:59 np0005592157 podman[293650]: 2026-01-22 14:33:59.69824529 +0000 UTC m=+0.228464471 container remove a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 22 09:33:59 np0005592157 systemd[1]: libpod-conmon-a25ef9ec63fdba9cce46b1c3b0640e8950bf548db2b40259cde775a1012f6e5f.scope: Deactivated successfully.
Jan 22 09:33:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:33:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:33:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:33:59 np0005592157 podman[293691]: 2026-01-22 14:33:59.919213774 +0000 UTC m=+0.076294828 container create 4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:33:59 np0005592157 systemd[1]: Started libpod-conmon-4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420.scope.
Jan 22 09:33:59 np0005592157 podman[293691]: 2026-01-22 14:33:59.889230539 +0000 UTC m=+0.046311603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:33:59 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:34:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931d9d0e56dd91694005f311841ebd415b50a60b688e7a5648b675a7c717e8a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931d9d0e56dd91694005f311841ebd415b50a60b688e7a5648b675a7c717e8a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931d9d0e56dd91694005f311841ebd415b50a60b688e7a5648b675a7c717e8a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931d9d0e56dd91694005f311841ebd415b50a60b688e7a5648b675a7c717e8a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931d9d0e56dd91694005f311841ebd415b50a60b688e7a5648b675a7c717e8a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:00 np0005592157 podman[293691]: 2026-01-22 14:34:00.01478553 +0000 UTC m=+0.171866614 container init 4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_snyder, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:34:00 np0005592157 podman[293691]: 2026-01-22 14:34:00.022515822 +0000 UTC m=+0.179596826 container start 4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_snyder, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:34:00 np0005592157 podman[293691]: 2026-01-22 14:34:00.025708592 +0000 UTC m=+0.182789646 container attach 4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_snyder, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:34:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 608 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 824 KiB/s rd, 819 KiB/s wr, 7 op/s
Jan 22 09:34:00 np0005592157 romantic_snyder[293708]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:34:00 np0005592157 romantic_snyder[293708]: --> relative data size: 1.0
Jan 22 09:34:00 np0005592157 romantic_snyder[293708]: --> All data devices are unavailable
Jan 22 09:34:00 np0005592157 systemd[1]: libpod-4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420.scope: Deactivated successfully.
Jan 22 09:34:00 np0005592157 podman[293691]: 2026-01-22 14:34:00.88720952 +0000 UTC m=+1.044290544 container died 4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_snyder, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:34:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-931d9d0e56dd91694005f311841ebd415b50a60b688e7a5648b675a7c717e8a4-merged.mount: Deactivated successfully.
Jan 22 09:34:00 np0005592157 podman[293691]: 2026-01-22 14:34:00.960785529 +0000 UTC m=+1.117866583 container remove 4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:34:00 np0005592157 systemd[1]: libpod-conmon-4a734e6629fd737be7a2a4a0257c426193c77da4365fc7a39dd1b92beecb5420.scope: Deactivated successfully.
Jan 22 09:34:00 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:34:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:01.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:34:01 np0005592157 podman[293925]: 2026-01-22 14:34:01.679249932 +0000 UTC m=+0.034847948 container create 93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:34:01 np0005592157 systemd[1]: Started libpod-conmon-93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179.scope.
Jan 22 09:34:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:34:01 np0005592157 podman[293925]: 2026-01-22 14:34:01.663800127 +0000 UTC m=+0.019398153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:34:01 np0005592157 podman[293925]: 2026-01-22 14:34:01.765466875 +0000 UTC m=+0.121064971 container init 93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:34:01 np0005592157 podman[293925]: 2026-01-22 14:34:01.772365387 +0000 UTC m=+0.127963403 container start 93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:34:01 np0005592157 podman[293925]: 2026-01-22 14:34:01.775419653 +0000 UTC m=+0.131017699 container attach 93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:34:01 np0005592157 wizardly_proskuriakova[293941]: 167 167
Jan 22 09:34:01 np0005592157 systemd[1]: libpod-93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179.scope: Deactivated successfully.
Jan 22 09:34:01 np0005592157 podman[293925]: 2026-01-22 14:34:01.777804452 +0000 UTC m=+0.133402458 container died 93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:34:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0a79728fc69f29fcce80095f380fc0667aaf5cc888cad7b3c78e1de4b403e329-merged.mount: Deactivated successfully.
Jan 22 09:34:01 np0005592157 podman[293925]: 2026-01-22 14:34:01.81796953 +0000 UTC m=+0.173567566 container remove 93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:34:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:01 np0005592157 systemd[1]: libpod-conmon-93eb226890cd936f00dbe54601da5dc7a0e892b2bfeb79accc397be6537a4179.scope: Deactivated successfully.
Jan 22 09:34:02 np0005592157 podman[293965]: 2026-01-22 14:34:02.020913636 +0000 UTC m=+0.064299560 container create 0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:34:02 np0005592157 systemd[1]: Started libpod-conmon-0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a.scope.
Jan 22 09:34:02 np0005592157 podman[293965]: 2026-01-22 14:34:01.994328915 +0000 UTC m=+0.037714899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:34:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:34:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a6efb41209083e9019ab42ccdc27fc36b831b8e21a7677aa0608f592a4dc6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a6efb41209083e9019ab42ccdc27fc36b831b8e21a7677aa0608f592a4dc6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a6efb41209083e9019ab42ccdc27fc36b831b8e21a7677aa0608f592a4dc6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a6efb41209083e9019ab42ccdc27fc36b831b8e21a7677aa0608f592a4dc6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:02 np0005592157 podman[293965]: 2026-01-22 14:34:02.125375873 +0000 UTC m=+0.168761817 container init 0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:34:02 np0005592157 podman[293965]: 2026-01-22 14:34:02.136360086 +0000 UTC m=+0.179746020 container start 0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:34:02 np0005592157 podman[293965]: 2026-01-22 14:34:02.141095834 +0000 UTC m=+0.184481768 container attach 0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:34:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 33 op/s
Jan 22 09:34:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:02 np0005592157 silly_feistel[293982]: {
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:    "0": [
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:        {
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "devices": [
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "/dev/loop3"
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            ],
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "lv_name": "ceph_lv0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "lv_size": "7511998464",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "name": "ceph_lv0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "tags": {
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.cluster_name": "ceph",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.crush_device_class": "",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.encrypted": "0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.osd_id": "0",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.type": "block",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:                "ceph.vdo": "0"
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            },
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "type": "block",
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:            "vg_name": "ceph_vg0"
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:        }
Jan 22 09:34:02 np0005592157 silly_feistel[293982]:    ]
Jan 22 09:34:02 np0005592157 silly_feistel[293982]: }
Jan 22 09:34:02 np0005592157 systemd[1]: libpod-0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a.scope: Deactivated successfully.
Jan 22 09:34:02 np0005592157 conmon[293982]: conmon 0ef9ce06c2183a4d957d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a.scope/container/memory.events
Jan 22 09:34:02 np0005592157 podman[293965]: 2026-01-22 14:34:02.864169071 +0000 UTC m=+0.907554985 container died 0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:34:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-81a6efb41209083e9019ab42ccdc27fc36b831b8e21a7677aa0608f592a4dc6f-merged.mount: Deactivated successfully.
Jan 22 09:34:02 np0005592157 podman[293965]: 2026-01-22 14:34:02.924320437 +0000 UTC m=+0.967706331 container remove 0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:34:02 np0005592157 systemd[1]: libpod-conmon-0ef9ce06c2183a4d957d198087a76dee3d323a51e496cddc3a2d07e3b0b27f3a.scope: Deactivated successfully.
Jan 22 09:34:02 np0005592157 podman[293991]: 2026-01-22 14:34:02.961397479 +0000 UTC m=+0.065785457 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:34:03 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:03 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:03 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:03.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:03 np0005592157 podman[294161]: 2026-01-22 14:34:03.640679517 +0000 UTC m=+0.060953376 container create b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:34:03 np0005592157 systemd[1]: Started libpod-conmon-b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b.scope.
Jan 22 09:34:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:34:03 np0005592157 podman[294161]: 2026-01-22 14:34:03.70673852 +0000 UTC m=+0.127012399 container init b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:34:03 np0005592157 podman[294161]: 2026-01-22 14:34:03.622721651 +0000 UTC m=+0.042995520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:34:03 np0005592157 podman[294161]: 2026-01-22 14:34:03.718463251 +0000 UTC m=+0.138737100 container start b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:34:03 np0005592157 systemd[1]: libpod-b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b.scope: Deactivated successfully.
Jan 22 09:34:03 np0005592157 brave_banzai[294177]: 167 167
Jan 22 09:34:03 np0005592157 podman[294161]: 2026-01-22 14:34:03.722513772 +0000 UTC m=+0.142787671 container attach b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:34:03 np0005592157 conmon[294177]: conmon b7bea8bcf709b0e98e58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b.scope/container/memory.events
Jan 22 09:34:03 np0005592157 podman[294161]: 2026-01-22 14:34:03.723907955 +0000 UTC m=+0.144181804 container died b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:34:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e705abe2057c1a4ff4ed5962e444a691c7f792036e421e0200b80bc89c74c4f4-merged.mount: Deactivated successfully.
Jan 22 09:34:03 np0005592157 podman[294161]: 2026-01-22 14:34:03.758500265 +0000 UTC m=+0.178774114 container remove b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 09:34:03 np0005592157 systemd[1]: libpod-conmon-b7bea8bcf709b0e98e58e5179569f7ee007d766837463bfbc62393a00ad3082b.scope: Deactivated successfully.
Jan 22 09:34:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:03.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:03 np0005592157 podman[294201]: 2026-01-22 14:34:03.967264846 +0000 UTC m=+0.069104049 container create b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:34:04 np0005592157 systemd[1]: Started libpod-conmon-b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898.scope.
Jan 22 09:34:04 np0005592157 podman[294201]: 2026-01-22 14:34:03.937412584 +0000 UTC m=+0.039251867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:34:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:34:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8c672de526d33dceac5c43971ef690abe2b112501c02ba15dbf431b64d71aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8c672de526d33dceac5c43971ef690abe2b112501c02ba15dbf431b64d71aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8c672de526d33dceac5c43971ef690abe2b112501c02ba15dbf431b64d71aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8c672de526d33dceac5c43971ef690abe2b112501c02ba15dbf431b64d71aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:04 np0005592157 podman[294201]: 2026-01-22 14:34:04.078695746 +0000 UTC m=+0.180535029 container init b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:34:04 np0005592157 podman[294201]: 2026-01-22 14:34:04.090265704 +0000 UTC m=+0.192104927 container start b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:34:04 np0005592157 podman[294201]: 2026-01-22 14:34:04.095132095 +0000 UTC m=+0.196971318 container attach b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 32 op/s
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012145507630746325 of space, bias 1.0, pg target 3.6436522892238976 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010796168341708543 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:34:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]: {
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]:        "osd_id": 0,
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]:        "type": "bluestore"
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]:    }
Jan 22 09:34:04 np0005592157 hopeful_jackson[294217]: }
Jan 22 09:34:04 np0005592157 systemd[1]: libpod-b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898.scope: Deactivated successfully.
Jan 22 09:34:04 np0005592157 podman[294201]: 2026-01-22 14:34:04.999166251 +0000 UTC m=+1.101005484 container died b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:34:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8c8c672de526d33dceac5c43971ef690abe2b112501c02ba15dbf431b64d71aa-merged.mount: Deactivated successfully.
Jan 22 09:34:05 np0005592157 podman[294201]: 2026-01-22 14:34:05.0638658 +0000 UTC m=+1.165705003 container remove b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:34:05 np0005592157 systemd[1]: libpod-conmon-b7e110c4ee7a2e83f7bfc573ca071ae2b0ef499cfc1960a25e76ba8b5bf83898.scope: Deactivated successfully.
Jan 22 09:34:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:34:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:34:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f0b8b5c5-796a-49a6-a6d6-4d2caaff22fd does not exist
Jan 22 09:34:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 320f6831-8118-4285-bd4c-1aa52f330b65 does not exist
Jan 22 09:34:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 631c94bd-3711-4a88-9a33-97ec51221dad does not exist
Jan 22 09:34:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:05.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:05 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 22 09:34:06 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:07.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:07 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 22 09:34:08 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:08 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:09.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:09 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:09 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:09.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:34:09.879 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:34:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:34:09.882 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:34:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 22 09:34:10 np0005592157 podman[294303]: 2026-01-22 14:34:10.429143463 +0000 UTC m=+0.146135765 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:34:10 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:11.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:11 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 27 op/s
Jan 22 09:34:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:12 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:13.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:13 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:13 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 640 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 409 KiB/s wr, 30 op/s
Jan 22 09:34:14 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:15.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:15 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 09:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.885106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456885226, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 251, "total_data_size": 1641281, "memory_usage": 1664576, "flush_reason": "Manual Compaction"}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456899449, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 1594051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53997, "largest_seqno": 55228, "table_properties": {"data_size": 1588533, "index_size": 2722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14237, "raw_average_key_size": 20, "raw_value_size": 1576433, "raw_average_value_size": 2314, "num_data_blocks": 118, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092378, "oldest_key_time": 1769092378, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 14428 microseconds, and 6461 cpu microseconds.
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899571) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 1594051 bytes OK
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899617) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.902520) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.902546) EVENT_LOG_v1 {"time_micros": 1769092456902540, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.902565) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1635610, prev total WAL file size 1635610, number of live WAL files 2.
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.903396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(1556KB)], [119(8825KB)]
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456903468, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10631701, "oldest_snapshot_seqno": -1}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 10247 keys, 8931406 bytes, temperature: kUnknown
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456966533, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 8931406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8875600, "index_size": 29070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 277482, "raw_average_key_size": 27, "raw_value_size": 8701340, "raw_average_value_size": 849, "num_data_blocks": 1083, "num_entries": 10247, "num_filter_entries": 10247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.967021) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8931406 bytes
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.968531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.2 rd, 141.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 10768, records dropped: 521 output_compression: NoCompression
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.968562) EVENT_LOG_v1 {"time_micros": 1769092456968547, "job": 72, "event": "compaction_finished", "compaction_time_micros": 63211, "compaction_time_cpu_micros": 27253, "output_level": 6, "num_output_files": 1, "total_output_size": 8931406, "num_input_records": 10768, "num_output_records": 10247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456969299, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456972213, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.903268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.972258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.972264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.972267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.972270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:34:16.972273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:17.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"} v 0) v1
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:17.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: from='client.? 192.168.122.102:0/2506543262' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='client.? ' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]': finished
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 22 09:34:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 22 09:34:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 40 op/s
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3489887515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3489887515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:18 np0005592157 ceph-mon[74359]: from='client.? ' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]': finished
Jan 22 09:34:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:19.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:19.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:19 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:34:19.884 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:20 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 22 09:34:21 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:21 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:21.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:21.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 22 09:34:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:22 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:23 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:23.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:23.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 23 KiB/s wr, 186 op/s
Jan 22 09:34:24 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:24 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:25.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:25.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 706 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 1.9 MiB/s wr, 294 op/s
Jan 22 09:34:26 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:26 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:27 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:27.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:27.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 706 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.6 MiB/s wr, 242 op/s
Jan 22 09:34:28 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 3458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:29.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:29 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:29.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 715 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.5 MiB/s wr, 246 op/s
Jan 22 09:34:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:34:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532096136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:34:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:31.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:31 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 3432 syncs, 3.60 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1329 writes, 4465 keys, 1329 commit groups, 1.0 writes per commit group, ingest: 3.76 MB, 0.01 MB/s#012Interval WAL: 1329 writes, 577 syncs, 2.30 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:34:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:31.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 745 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 227 op/s
Jan 22 09:34:32 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 3463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:33 np0005592157 podman[294392]: 2026-01-22 14:34:33.353357873 +0000 UTC m=+0.074203785 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:34:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:33.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:33 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 3463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:33.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Jan 22 09:34:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:35.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 22 09:34:36 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 3468 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:37 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:37 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 3468 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:37.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 22 09:34:38 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:39 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:39.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 22 09:34:40 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:41 np0005592157 podman[294441]: 2026-01-22 14:34:41.203481105 +0000 UTC m=+0.142651435 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:34:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:41.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:41 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:41.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 22 09:34:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 3473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:42 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:43 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 3473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:43 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:34:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:34:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 88 KiB/s wr, 14 op/s
Jan 22 09:34:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 09:34:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:45.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:45 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:45.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 22 09:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:34:46 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:47.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:34:47
Jan 22 09:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root']
Jan 22 09:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:34:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:34:47.610 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:34:47.610 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:34:47.610 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:47 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:47.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 22 09:34:48 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:49.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:49 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:49.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 22 09:34:50 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:51.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:51 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:51.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 09:34:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 3478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:52 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:52 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 3478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:53.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:34:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:53.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:34:54 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 09:34:55 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:55 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:55.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:55.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 1023 B/s wr, 0 op/s
Jan 22 09:34:56 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:57 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:57.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 3488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:34:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:57.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:34:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 1023 B/s wr, 0 op/s
Jan 22 09:34:58 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:58 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 3488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:59 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:34:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:59.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 2.8 KiB/s wr, 0 op/s
Jan 22 09:35:00 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:01 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:01.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:01.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 1.9 KiB/s wr, 1 op/s
Jan 22 09:35:02 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 18 slow ops, oldest one blocked for 3493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:03 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:03 np0005592157 ceph-mon[74359]: Health check update: 18 slow ops, oldest one blocked for 3493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:03.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:35:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:03.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 3.2 KiB/s wr, 1 op/s
Jan 22 09:35:04 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:04 np0005592157 podman[294555]: 2026-01-22 14:35:04.364982126 +0000 UTC m=+0.094512079 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318372824772009 of space, bias 1.0, pg target 4.2955118474316025 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2928284361622929 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:35:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:35:05 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:05.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 09:35:06 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:35:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:35:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592157 podman[294844]: 2026-01-22 14:35:07.166144991 +0000 UTC m=+0.082708366 container create 3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendeleev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:35:07 np0005592157 systemd[1]: Started libpod-conmon-3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee.scope.
Jan 22 09:35:07 np0005592157 podman[294844]: 2026-01-22 14:35:07.12747677 +0000 UTC m=+0.044040235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:07 np0005592157 podman[294844]: 2026-01-22 14:35:07.272492853 +0000 UTC m=+0.189056268 container init 3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 09:35:07 np0005592157 podman[294844]: 2026-01-22 14:35:07.285530237 +0000 UTC m=+0.202093632 container start 3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendeleev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 09:35:07 np0005592157 podman[294844]: 2026-01-22 14:35:07.290363087 +0000 UTC m=+0.206926502 container attach 3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:35:07 np0005592157 kind_mendeleev[294860]: 167 167
Jan 22 09:35:07 np0005592157 systemd[1]: libpod-3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee.scope: Deactivated successfully.
Jan 22 09:35:07 np0005592157 podman[294844]: 2026-01-22 14:35:07.298882248 +0000 UTC m=+0.215445653 container died 3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:35:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d4cd70918b5d105be66b921da48db4a2e082dfebcaba5e24488c319743dd178a-merged.mount: Deactivated successfully.
Jan 22 09:35:07 np0005592157 podman[294844]: 2026-01-22 14:35:07.351559157 +0000 UTC m=+0.268122542 container remove 3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:35:07 np0005592157 systemd[1]: libpod-conmon-3fdab8a4b32d9f7bc1f0e558065b8c3b9dcc87df85495a355f49cb6adfd8f8ee.scope: Deactivated successfully.
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:07 np0005592157 podman[294881]: 2026-01-22 14:35:07.577464489 +0000 UTC m=+0.068828121 container create 3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:35:07 np0005592157 systemd[1]: Started libpod-conmon-3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541.scope.
Jan 22 09:35:07 np0005592157 podman[294881]: 2026-01-22 14:35:07.546190072 +0000 UTC m=+0.037553774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2a833766723c4dde8841ca8feecfe2fdb837d9c2f01b05a85fe2e5d60b29b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2a833766723c4dde8841ca8feecfe2fdb837d9c2f01b05a85fe2e5d60b29b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2a833766723c4dde8841ca8feecfe2fdb837d9c2f01b05a85fe2e5d60b29b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2a833766723c4dde8841ca8feecfe2fdb837d9c2f01b05a85fe2e5d60b29b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:07 np0005592157 podman[294881]: 2026-01-22 14:35:07.676469908 +0000 UTC m=+0.167833520 container init 3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:35:07 np0005592157 podman[294881]: 2026-01-22 14:35:07.685037071 +0000 UTC m=+0.176400683 container start 3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:35:07 np0005592157 podman[294881]: 2026-01-22 14:35:07.688636731 +0000 UTC m=+0.180000353 container attach 3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:35:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:07.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 09:35:08 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:08 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:08 np0005592157 bold_morse[294899]: [
Jan 22 09:35:08 np0005592157 bold_morse[294899]:    {
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "available": false,
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "ceph_device": false,
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "lsm_data": {},
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "lvs": [],
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "path": "/dev/sr0",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "rejected_reasons": [
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "Insufficient space (<5GB)",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "Has a FileSystem"
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        ],
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        "sys_api": {
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "actuators": null,
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "device_nodes": "sr0",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "devname": "sr0",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "human_readable_size": "482.00 KB",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "id_bus": "ata",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "model": "QEMU DVD-ROM",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "nr_requests": "2",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "parent": "/dev/sr0",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "partitions": {},
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "path": "/dev/sr0",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "removable": "1",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "rev": "2.5+",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "ro": "0",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "rotational": "1",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "sas_address": "",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "sas_device_handle": "",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "scheduler_mode": "mq-deadline",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "sectors": 0,
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "sectorsize": "2048",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "size": 493568.0,
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "support_discard": "2048",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "type": "disk",
Jan 22 09:35:08 np0005592157 bold_morse[294899]:            "vendor": "QEMU"
Jan 22 09:35:08 np0005592157 bold_morse[294899]:        }
Jan 22 09:35:08 np0005592157 bold_morse[294899]:    }
Jan 22 09:35:08 np0005592157 bold_morse[294899]: ]
Jan 22 09:35:08 np0005592157 systemd[1]: libpod-3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541.scope: Deactivated successfully.
Jan 22 09:35:08 np0005592157 systemd[1]: libpod-3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541.scope: Consumed 1.159s CPU time.
Jan 22 09:35:08 np0005592157 conmon[294899]: conmon 3269eb9ab4e760ca1a72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541.scope/container/memory.events
Jan 22 09:35:08 np0005592157 podman[294881]: 2026-01-22 14:35:08.84515978 +0000 UTC m=+1.336523412 container died 3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:35:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bc2a833766723c4dde8841ca8feecfe2fdb837d9c2f01b05a85fe2e5d60b29b0-merged.mount: Deactivated successfully.
Jan 22 09:35:08 np0005592157 podman[294881]: 2026-01-22 14:35:08.896061805 +0000 UTC m=+1.387425417 container remove 3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:35:08 np0005592157 systemd[1]: libpod-conmon-3269eb9ab4e760ca1a72bf56a9f6a56df9c820163e5706fd953524938dbd9541.scope: Deactivated successfully.
Jan 22 09:35:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:35:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:35:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:09 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:09.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:09.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 43bb3c83-d720-4efa-ad39-3a3c16a1b80d does not exist
Jan 22 09:35:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d8f94d3b-ddb3-4777-8028-4a8281454a0a does not exist
Jan 22 09:35:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f8863b83-1342-4967-8a35-0390f1da90d5 does not exist
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:35:10 np0005592157 podman[296131]: 2026-01-22 14:35:10.951823994 +0000 UTC m=+0.042521537 container create 81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:35:10 np0005592157 systemd[1]: Started libpod-conmon-81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf.scope.
Jan 22 09:35:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:11 np0005592157 podman[296131]: 2026-01-22 14:35:10.934492263 +0000 UTC m=+0.025189806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:11 np0005592157 podman[296131]: 2026-01-22 14:35:11.043129512 +0000 UTC m=+0.133827055 container init 81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:35:11 np0005592157 podman[296131]: 2026-01-22 14:35:11.0506664 +0000 UTC m=+0.141363923 container start 81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:35:11 np0005592157 podman[296131]: 2026-01-22 14:35:11.054206457 +0000 UTC m=+0.144903980 container attach 81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:35:11 np0005592157 brave_perlman[296147]: 167 167
Jan 22 09:35:11 np0005592157 systemd[1]: libpod-81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf.scope: Deactivated successfully.
Jan 22 09:35:11 np0005592157 podman[296131]: 2026-01-22 14:35:11.056647588 +0000 UTC m=+0.147345111 container died 81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:35:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ba60e0a5ff981bb5cd4a0a6721552554faa949b51d309b287bedafa09c4e4177-merged.mount: Deactivated successfully.
Jan 22 09:35:11 np0005592157 podman[296131]: 2026-01-22 14:35:11.099205455 +0000 UTC m=+0.189903008 container remove 81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 09:35:11 np0005592157 systemd[1]: libpod-conmon-81e7a8e010bf83914a100f2c096b9c6ae93930509971d51ceb8de84066d651cf.scope: Deactivated successfully.
Jan 22 09:35:11 np0005592157 podman[296171]: 2026-01-22 14:35:11.266503121 +0000 UTC m=+0.039598524 container create 75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:35:11 np0005592157 systemd[1]: Started libpod-conmon-75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad.scope.
Jan 22 09:35:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77159cfe5b24d4684665976566dff98788d4914b2beb942e94690d0860e00f99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77159cfe5b24d4684665976566dff98788d4914b2beb942e94690d0860e00f99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77159cfe5b24d4684665976566dff98788d4914b2beb942e94690d0860e00f99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77159cfe5b24d4684665976566dff98788d4914b2beb942e94690d0860e00f99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77159cfe5b24d4684665976566dff98788d4914b2beb942e94690d0860e00f99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:11 np0005592157 podman[296171]: 2026-01-22 14:35:11.249497379 +0000 UTC m=+0.022592812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:11 np0005592157 podman[296171]: 2026-01-22 14:35:11.35219638 +0000 UTC m=+0.125291833 container init 75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:35:11 np0005592157 podman[296171]: 2026-01-22 14:35:11.361001589 +0000 UTC m=+0.134097012 container start 75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 09:35:11 np0005592157 podman[296171]: 2026-01-22 14:35:11.364820614 +0000 UTC m=+0.137916047 container attach 75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:35:11 np0005592157 podman[296182]: 2026-01-22 14:35:11.381242532 +0000 UTC m=+0.107889132 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:35:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:11.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:11 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:35:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:11.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:35:12 np0005592157 beautiful_mayer[296201]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:35:12 np0005592157 beautiful_mayer[296201]: --> relative data size: 1.0
Jan 22 09:35:12 np0005592157 beautiful_mayer[296201]: --> All data devices are unavailable
Jan 22 09:35:12 np0005592157 systemd[1]: libpod-75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad.scope: Deactivated successfully.
Jan 22 09:35:12 np0005592157 podman[296171]: 2026-01-22 14:35:12.166383355 +0000 UTC m=+0.939478798 container died 75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:35:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 22 09:35:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-77159cfe5b24d4684665976566dff98788d4914b2beb942e94690d0860e00f99-merged.mount: Deactivated successfully.
Jan 22 09:35:12 np0005592157 podman[296171]: 2026-01-22 14:35:12.229025241 +0000 UTC m=+1.002120654 container remove 75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:35:12 np0005592157 systemd[1]: libpod-conmon-75e46285ee2bcb40e7acd7d889d754de45324f68418ff84c36fb7cd5636c0aad.scope: Deactivated successfully.
Jan 22 09:35:12 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:12 np0005592157 podman[296386]: 2026-01-22 14:35:12.924145979 +0000 UTC m=+0.054605207 container create c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:35:12 np0005592157 systemd[1]: Started libpod-conmon-c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25.scope.
Jan 22 09:35:12 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:12 np0005592157 podman[296386]: 2026-01-22 14:35:12.897886747 +0000 UTC m=+0.028346025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:13 np0005592157 podman[296386]: 2026-01-22 14:35:13.002506786 +0000 UTC m=+0.132966064 container init c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:35:13 np0005592157 podman[296386]: 2026-01-22 14:35:13.013314895 +0000 UTC m=+0.143774093 container start c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:35:13 np0005592157 podman[296386]: 2026-01-22 14:35:13.016999516 +0000 UTC m=+0.147458744 container attach c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 09:35:13 np0005592157 fervent_mclean[296402]: 167 167
Jan 22 09:35:13 np0005592157 systemd[1]: libpod-c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25.scope: Deactivated successfully.
Jan 22 09:35:13 np0005592157 podman[296386]: 2026-01-22 14:35:13.021051077 +0000 UTC m=+0.151510275 container died c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:35:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1902b946e302f09f837e72e090958ce181c878e2ad94c7dcd1400aa66a4f74b6-merged.mount: Deactivated successfully.
Jan 22 09:35:13 np0005592157 podman[296386]: 2026-01-22 14:35:13.063791079 +0000 UTC m=+0.194250267 container remove c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:35:13 np0005592157 systemd[1]: libpod-conmon-c4079db23447108b901bd55bfbf3387edf2e298dc9e5d9c5717c08553d21df25.scope: Deactivated successfully.
Jan 22 09:35:13 np0005592157 podman[296424]: 2026-01-22 14:35:13.285020704 +0000 UTC m=+0.076187753 container create ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brahmagupta, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:35:13 np0005592157 systemd[1]: Started libpod-conmon-ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4.scope.
Jan 22 09:35:13 np0005592157 podman[296424]: 2026-01-22 14:35:13.2542555 +0000 UTC m=+0.045422629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aba420e6579c737ba586543180c1069243d6c81c611c6029c2a3325824c68bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aba420e6579c737ba586543180c1069243d6c81c611c6029c2a3325824c68bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aba420e6579c737ba586543180c1069243d6c81c611c6029c2a3325824c68bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aba420e6579c737ba586543180c1069243d6c81c611c6029c2a3325824c68bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:13 np0005592157 podman[296424]: 2026-01-22 14:35:13.392846313 +0000 UTC m=+0.184013392 container init ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:35:13 np0005592157 podman[296424]: 2026-01-22 14:35:13.40802055 +0000 UTC m=+0.199187639 container start ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:35:13 np0005592157 podman[296424]: 2026-01-22 14:35:13.412960133 +0000 UTC m=+0.204127232 container attach ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:35:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:13.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:13 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:13 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:13.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]: {
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:    "0": [
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:        {
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "devices": [
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "/dev/loop3"
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            ],
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "lv_name": "ceph_lv0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "lv_size": "7511998464",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "name": "ceph_lv0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "tags": {
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.cluster_name": "ceph",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.crush_device_class": "",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.encrypted": "0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.osd_id": "0",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.type": "block",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:                "ceph.vdo": "0"
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            },
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "type": "block",
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:            "vg_name": "ceph_vg0"
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:        }
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]:    ]
Jan 22 09:35:14 np0005592157 xenodochial_brahmagupta[296441]: }
Jan 22 09:35:14 np0005592157 systemd[1]: libpod-ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4.scope: Deactivated successfully.
Jan 22 09:35:14 np0005592157 podman[296424]: 2026-01-22 14:35:14.256241651 +0000 UTC m=+1.047408780 container died ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:35:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4aba420e6579c737ba586543180c1069243d6c81c611c6029c2a3325824c68bf-merged.mount: Deactivated successfully.
Jan 22 09:35:14 np0005592157 podman[296424]: 2026-01-22 14:35:14.330903136 +0000 UTC m=+1.122070205 container remove ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brahmagupta, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:35:14 np0005592157 systemd[1]: libpod-conmon-ffa63832fe7a9d293642d98c832d5baeb6f43ff7800b2ab8da79d742b80a13d4.scope: Deactivated successfully.
Jan 22 09:35:14 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:14 np0005592157 podman[296605]: 2026-01-22 14:35:14.973590262 +0000 UTC m=+0.037546144 container create 0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:35:15 np0005592157 systemd[1]: Started libpod-conmon-0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677.scope.
Jan 22 09:35:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:15 np0005592157 podman[296605]: 2026-01-22 14:35:15.050178694 +0000 UTC m=+0.114134626 container init 0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:35:15 np0005592157 podman[296605]: 2026-01-22 14:35:14.95943028 +0000 UTC m=+0.023386182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:15 np0005592157 podman[296605]: 2026-01-22 14:35:15.057273831 +0000 UTC m=+0.121229723 container start 0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:35:15 np0005592157 podman[296605]: 2026-01-22 14:35:15.061127676 +0000 UTC m=+0.125083558 container attach 0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:35:15 np0005592157 brave_dhawan[296622]: 167 167
Jan 22 09:35:15 np0005592157 systemd[1]: libpod-0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677.scope: Deactivated successfully.
Jan 22 09:35:15 np0005592157 podman[296605]: 2026-01-22 14:35:15.063776162 +0000 UTC m=+0.127732054 container died 0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:35:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b98700ffe127ad53badbeefe9fc2bb7d7844f7b14219c90a43830335e6c1af70-merged.mount: Deactivated successfully.
Jan 22 09:35:15 np0005592157 podman[296605]: 2026-01-22 14:35:15.10835396 +0000 UTC m=+0.172309852 container remove 0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:35:15 np0005592157 systemd[1]: libpod-conmon-0ea26357f58631f35afcddd2a161b0d055d82ba948ff475fa5c2ed459b560677.scope: Deactivated successfully.
Jan 22 09:35:15 np0005592157 podman[296645]: 2026-01-22 14:35:15.27058849 +0000 UTC m=+0.037392160 container create 252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:35:15 np0005592157 systemd[1]: Started libpod-conmon-252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92.scope.
Jan 22 09:35:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:35:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e6ea77b91ac8426419daa7ea558a6b0c3a99461edffe2eef512fc33444410b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e6ea77b91ac8426419daa7ea558a6b0c3a99461edffe2eef512fc33444410b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e6ea77b91ac8426419daa7ea558a6b0c3a99461edffe2eef512fc33444410b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e6ea77b91ac8426419daa7ea558a6b0c3a99461edffe2eef512fc33444410b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:15 np0005592157 podman[296645]: 2026-01-22 14:35:15.342425763 +0000 UTC m=+0.109229433 container init 252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:35:15 np0005592157 podman[296645]: 2026-01-22 14:35:15.349712074 +0000 UTC m=+0.116515774 container start 252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:35:15 np0005592157 podman[296645]: 2026-01-22 14:35:15.254636544 +0000 UTC m=+0.021440244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:15 np0005592157 podman[296645]: 2026-01-22 14:35:15.354123344 +0000 UTC m=+0.120927014 container attach 252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:35:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:15.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:15 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:15.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]: {
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]:        "osd_id": 0,
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]:        "type": "bluestore"
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]:    }
Jan 22 09:35:16 np0005592157 practical_archimedes[296661]: }
Jan 22 09:35:16 np0005592157 systemd[1]: libpod-252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92.scope: Deactivated successfully.
Jan 22 09:35:16 np0005592157 podman[296645]: 2026-01-22 14:35:16.156674921 +0000 UTC m=+0.923478591 container died 252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:35:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e6e6ea77b91ac8426419daa7ea558a6b0c3a99461edffe2eef512fc33444410b-merged.mount: Deactivated successfully.
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 22 09:35:16 np0005592157 podman[296645]: 2026-01-22 14:35:16.219248856 +0000 UTC m=+0.986052526 container remove 252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:35:16 np0005592157 systemd[1]: libpod-conmon-252935aab0c38fc6415a3a2cf827a9fad8163139f9995d329c409031326e3f92.scope: Deactivated successfully.
Jan 22 09:35:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:35:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:35:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ba956338-2304-4035-a3bd-96940adee84e does not exist
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3e0e27d8-0ce5-4fb1-ba60-e47970082644 does not exist
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 905cedf6-2247-4146-bc5e-9b7f25a687b6 does not exist
Jan 22 09:35:16 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:35:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:17 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:17 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:18 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:19 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:19.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:20 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:21.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:22 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:23.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:23 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:23 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:23.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:24 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:25.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:25 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:25.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 3518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:26 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:27 np0005592157 ceph-mon[74359]: Health check update: 30 slow ops, oldest one blocked for 3518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:27 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:27.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:28 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:29.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:29 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:30.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:30 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:31 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 3523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:32 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:33.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:33 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 3523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:33 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 09:35:34 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:35 np0005592157 podman[296809]: 2026-01-22 14:35:35.350511218 +0000 UTC m=+0.083859064 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 09:35:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:35.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:35 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:35:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:35:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 09:35:36 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:37.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:37 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:35:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:35:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 09:35:38 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:39.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:39 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:39 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:40.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 09:35:40 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:41 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 09:35:42 np0005592157 podman[296833]: 2026-01-22 14:35:42.371025861 +0000 UTC m=+0.106948607 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:35:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 3528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:43 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:43 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 3528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:35:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:43.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:35:44 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:44.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 09:35:45 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:45.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:35:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:46.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:35:46 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:35:47 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:35:47
Jan 22 09:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'backups', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 22 09:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:35:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:47.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 3537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:35:47.611 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:35:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:35:47.612 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:35:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:35:47.612 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:35:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:48.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:48 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:48 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 3537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:49 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:49.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:50.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:50 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:51 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:35:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:51.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:35:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:52 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 3542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:53 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:53 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 3542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:54 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:55 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:55.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:56.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:56 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:57 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 3547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:58.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:35:58 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:35:58 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 3547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:59 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:35:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:35:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:59.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:00.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:00 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:01 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:01.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:02.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 3552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.543355) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562543618, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1565, "num_deletes": 255, "total_data_size": 2163722, "memory_usage": 2204176, "flush_reason": "Manual Compaction"}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562558537, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 2109887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55229, "largest_seqno": 56793, "table_properties": {"data_size": 2103097, "index_size": 3605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17067, "raw_average_key_size": 20, "raw_value_size": 2088285, "raw_average_value_size": 2546, "num_data_blocks": 155, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092457, "oldest_key_time": 1769092457, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 15268 microseconds, and 4964 cpu microseconds.
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.558621) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 2109887 bytes OK
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.558646) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.560342) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.560360) EVENT_LOG_v1 {"time_micros": 1769092562560354, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.560381) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2156820, prev total WAL file size 2156820, number of live WAL files 2.
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.561173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(2060KB)], [122(8722KB)]
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562561211, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11041293, "oldest_snapshot_seqno": -1}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 10536 keys, 10877934 bytes, temperature: kUnknown
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562622392, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 10877934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10818526, "index_size": 31968, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26373, "raw_key_size": 285080, "raw_average_key_size": 27, "raw_value_size": 10637550, "raw_average_value_size": 1009, "num_data_blocks": 1202, "num_entries": 10536, "num_filter_entries": 10536, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.622700) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 10877934 bytes
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.624199) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.1 rd, 177.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.4) write-amplify(5.2) OK, records in: 11067, records dropped: 531 output_compression: NoCompression
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.624216) EVENT_LOG_v1 {"time_micros": 1769092562624208, "job": 74, "event": "compaction_finished", "compaction_time_micros": 61299, "compaction_time_cpu_micros": 24043, "output_level": 6, "num_output_files": 1, "total_output_size": 10877934, "num_input_records": 11067, "num_output_records": 10536, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562624739, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562626666, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.561134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.626736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.626742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.626744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.626745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:02.626746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:03 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:03.043 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:36:03 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:03.044 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:36:03 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:03 np0005592157 ceph-mon[74359]: Health check update: 21 slow ops, oldest one blocked for 3552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:03.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:04.047 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:36:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:04.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2928284361622929 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:36:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:36:04 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:05 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:05.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:06.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:06 np0005592157 podman[296972]: 2026-01-22 14:36:06.359487006 +0000 UTC m=+0.084961672 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 09:36:06 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:07.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 3557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:07 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:08.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:08 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:08 np0005592157 ceph-mon[74359]: Health check update: 21 slow ops, oldest one blocked for 3557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:09.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:09 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:10.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:10 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:11.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:11 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:12.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 3562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:12 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:13 np0005592157 podman[296995]: 2026-01-22 14:36:13.379856552 +0000 UTC m=+0.110505076 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 09:36:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:13 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:13 np0005592157 ceph-mon[74359]: Health check update: 21 slow ops, oldest one blocked for 3562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:14.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:14 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:14 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:15 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:16.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:36:16 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:36:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:36:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:17.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/472125160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/472125160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1f67e08c-0bb3-493d-a728-c8d7853e1b4e does not exist
Jan 22 09:36:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ea60ebd8-b0c5-4ea3-bb84-08ee83110db2 does not exist
Jan 22 09:36:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 04ea7068-b72f-4437-ad30-155cba4f278d does not exist
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:36:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:36:19 np0005592157 podman[297416]: 2026-01-22 14:36:19.209061131 +0000 UTC m=+0.066001911 container create ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 09:36:19 np0005592157 systemd[1]: Started libpod-conmon-ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3.scope.
Jan 22 09:36:19 np0005592157 podman[297416]: 2026-01-22 14:36:19.187138686 +0000 UTC m=+0.044079446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:36:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:36:19 np0005592157 podman[297416]: 2026-01-22 14:36:19.313336281 +0000 UTC m=+0.170277041 container init ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:36:19 np0005592157 podman[297416]: 2026-01-22 14:36:19.325677918 +0000 UTC m=+0.182618658 container start ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:36:19 np0005592157 podman[297416]: 2026-01-22 14:36:19.330256211 +0000 UTC m=+0.187197051 container attach ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:36:19 np0005592157 hungry_turing[297432]: 167 167
Jan 22 09:36:19 np0005592157 systemd[1]: libpod-ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3.scope: Deactivated successfully.
Jan 22 09:36:19 np0005592157 conmon[297432]: conmon ec59f16c82b7394c794a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3.scope/container/memory.events
Jan 22 09:36:19 np0005592157 podman[297416]: 2026-01-22 14:36:19.336675451 +0000 UTC m=+0.193616201 container died ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 09:36:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b7e2d2905778ea3c99f9bfa0aea4ce60fdfc1473c155c6e7ce15e4c13170e6f8-merged.mount: Deactivated successfully.
Jan 22 09:36:19 np0005592157 podman[297416]: 2026-01-22 14:36:19.384227942 +0000 UTC m=+0.241168702 container remove ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:36:19 np0005592157 systemd[1]: libpod-conmon-ec59f16c82b7394c794a601ecd22ab840c933ec561b4a25ce72deb0cdffecbd3.scope: Deactivated successfully.
Jan 22 09:36:19 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:36:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:19.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:19 np0005592157 podman[297455]: 2026-01-22 14:36:19.610135124 +0000 UTC m=+0.063331304 container create 9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hellman, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:36:19 np0005592157 systemd[1]: Started libpod-conmon-9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7.scope.
Jan 22 09:36:19 np0005592157 podman[297455]: 2026-01-22 14:36:19.581457222 +0000 UTC m=+0.034653442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:36:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:36:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31af2424157a9629b7751b7a8c578e1844d01e4c65be6b6e064eea7bcba7cec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31af2424157a9629b7751b7a8c578e1844d01e4c65be6b6e064eea7bcba7cec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31af2424157a9629b7751b7a8c578e1844d01e4c65be6b6e064eea7bcba7cec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31af2424157a9629b7751b7a8c578e1844d01e4c65be6b6e064eea7bcba7cec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31af2424157a9629b7751b7a8c578e1844d01e4c65be6b6e064eea7bcba7cec4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:19 np0005592157 podman[297455]: 2026-01-22 14:36:19.723296745 +0000 UTC m=+0.176492925 container init 9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:36:19 np0005592157 podman[297455]: 2026-01-22 14:36:19.735001845 +0000 UTC m=+0.188197995 container start 9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hellman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 09:36:19 np0005592157 podman[297455]: 2026-01-22 14:36:19.74404046 +0000 UTC m=+0.197236610 container attach 9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hellman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 09:36:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:20.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:20 np0005592157 hopeful_hellman[297473]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:36:20 np0005592157 hopeful_hellman[297473]: --> relative data size: 1.0
Jan 22 09:36:20 np0005592157 hopeful_hellman[297473]: --> All data devices are unavailable
Jan 22 09:36:20 np0005592157 systemd[1]: libpod-9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7.scope: Deactivated successfully.
Jan 22 09:36:20 np0005592157 podman[297455]: 2026-01-22 14:36:20.505840564 +0000 UTC m=+0.959036744 container died 9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hellman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:36:20 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-31af2424157a9629b7751b7a8c578e1844d01e4c65be6b6e064eea7bcba7cec4-merged.mount: Deactivated successfully.
Jan 22 09:36:20 np0005592157 podman[297455]: 2026-01-22 14:36:20.586564279 +0000 UTC m=+1.039760429 container remove 9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hellman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 09:36:20 np0005592157 systemd[1]: libpod-conmon-9ca3ff6ec3e0df7845139cf6041f48bcba9e6d07517ddec35ca4c841c2787ba7.scope: Deactivated successfully.
Jan 22 09:36:21 np0005592157 podman[297640]: 2026-01-22 14:36:21.431359506 +0000 UTC m=+0.064351670 container create 491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:36:21 np0005592157 systemd[1]: Started libpod-conmon-491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c.scope.
Jan 22 09:36:21 np0005592157 podman[297640]: 2026-01-22 14:36:21.404354335 +0000 UTC m=+0.037346569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:36:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:36:21 np0005592157 podman[297640]: 2026-01-22 14:36:21.521362582 +0000 UTC m=+0.154354786 container init 491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:36:21 np0005592157 podman[297640]: 2026-01-22 14:36:21.53014573 +0000 UTC m=+0.163137884 container start 491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nash, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:36:21 np0005592157 upbeat_nash[297657]: 167 167
Jan 22 09:36:21 np0005592157 podman[297640]: 2026-01-22 14:36:21.534247512 +0000 UTC m=+0.167239676 container attach 491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:36:21 np0005592157 systemd[1]: libpod-491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c.scope: Deactivated successfully.
Jan 22 09:36:21 np0005592157 podman[297640]: 2026-01-22 14:36:21.534768665 +0000 UTC m=+0.167760819 container died 491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nash, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:36:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:21.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-37657cb6367b0babe6bc6ca1e63d8581480ede1a35ed4e12466fd38053b5a91a-merged.mount: Deactivated successfully.
Jan 22 09:36:21 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:21 np0005592157 podman[297640]: 2026-01-22 14:36:21.570976054 +0000 UTC m=+0.203968198 container remove 491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nash, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:36:21 np0005592157 systemd[1]: libpod-conmon-491967d04990532ec6603eb3e3012182cecf3add82b105540bbc85f22a640f2c.scope: Deactivated successfully.
Jan 22 09:36:21 np0005592157 podman[297681]: 2026-01-22 14:36:21.708341957 +0000 UTC m=+0.037075212 container create 29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:36:21 np0005592157 systemd[1]: Started libpod-conmon-29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db.scope.
Jan 22 09:36:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:36:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad438d393f63da2697b368a17b8e7ed8610c0bc53102c0e53764f3c1350bcdc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad438d393f63da2697b368a17b8e7ed8610c0bc53102c0e53764f3c1350bcdc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad438d393f63da2697b368a17b8e7ed8610c0bc53102c0e53764f3c1350bcdc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad438d393f63da2697b368a17b8e7ed8610c0bc53102c0e53764f3c1350bcdc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:21 np0005592157 podman[297681]: 2026-01-22 14:36:21.781529685 +0000 UTC m=+0.110262960 container init 29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:36:21 np0005592157 podman[297681]: 2026-01-22 14:36:21.691533699 +0000 UTC m=+0.020266974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:36:21 np0005592157 podman[297681]: 2026-01-22 14:36:21.791539814 +0000 UTC m=+0.120273069 container start 29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:36:21 np0005592157 podman[297681]: 2026-01-22 14:36:21.794889297 +0000 UTC m=+0.123622552 container attach 29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:36:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:22.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 3567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]: {
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:    "0": [
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:        {
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "devices": [
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "/dev/loop3"
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            ],
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "lv_name": "ceph_lv0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "lv_size": "7511998464",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "name": "ceph_lv0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "tags": {
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.cluster_name": "ceph",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.crush_device_class": "",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.encrypted": "0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.osd_id": "0",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.type": "block",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:                "ceph.vdo": "0"
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            },
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "type": "block",
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:            "vg_name": "ceph_vg0"
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:        }
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]:    ]
Jan 22 09:36:22 np0005592157 strange_sanderson[297697]: }
Jan 22 09:36:22 np0005592157 systemd[1]: libpod-29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db.scope: Deactivated successfully.
Jan 22 09:36:22 np0005592157 podman[297681]: 2026-01-22 14:36:22.615062402 +0000 UTC m=+0.943795697 container died 29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:36:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ad438d393f63da2697b368a17b8e7ed8610c0bc53102c0e53764f3c1350bcdc2-merged.mount: Deactivated successfully.
Jan 22 09:36:22 np0005592157 podman[297681]: 2026-01-22 14:36:22.685199684 +0000 UTC m=+1.013932939 container remove 29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:36:22 np0005592157 systemd[1]: libpod-conmon-29afa83658757e856afe434e17e76aff5a9e1c6b7369edf9b80fc5d3567179db.scope: Deactivated successfully.
Jan 22 09:36:23 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:23 np0005592157 ceph-mon[74359]: Health check update: 21 slow ops, oldest one blocked for 3567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:23 np0005592157 podman[297908]: 2026-01-22 14:36:23.317700416 +0000 UTC m=+0.062063462 container create 4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:36:23 np0005592157 systemd[1]: Started libpod-conmon-4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099.scope.
Jan 22 09:36:23 np0005592157 podman[297908]: 2026-01-22 14:36:23.294340306 +0000 UTC m=+0.038703432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:36:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:36:23 np0005592157 podman[297908]: 2026-01-22 14:36:23.413444234 +0000 UTC m=+0.157807290 container init 4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:36:23 np0005592157 podman[297908]: 2026-01-22 14:36:23.42333159 +0000 UTC m=+0.167694616 container start 4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:36:23 np0005592157 podman[297908]: 2026-01-22 14:36:23.426993831 +0000 UTC m=+0.171356877 container attach 4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:36:23 np0005592157 flamboyant_visvesvaraya[297925]: 167 167
Jan 22 09:36:23 np0005592157 systemd[1]: libpod-4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099.scope: Deactivated successfully.
Jan 22 09:36:23 np0005592157 podman[297908]: 2026-01-22 14:36:23.429831901 +0000 UTC m=+0.174194967 container died 4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:36:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a7fa5673ae1c9753f35856a0b9c112562861d0580611cf8a7b5dfe3fd65b778b-merged.mount: Deactivated successfully.
Jan 22 09:36:23 np0005592157 podman[297908]: 2026-01-22 14:36:23.481999517 +0000 UTC m=+0.226362583 container remove 4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:36:23 np0005592157 systemd[1]: libpod-conmon-4cb1e3e7b9673ae6a5702812b391954a63720a5089380b75fe4fecd26a5c1099.scope: Deactivated successfully.
Jan 22 09:36:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:23.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:23 np0005592157 podman[297950]: 2026-01-22 14:36:23.699728566 +0000 UTC m=+0.060145005 container create 97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_colden, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 22 09:36:23 np0005592157 systemd[1]: Started libpod-conmon-97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9.scope.
Jan 22 09:36:23 np0005592157 podman[297950]: 2026-01-22 14:36:23.6797482 +0000 UTC m=+0.040164629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:36:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:36:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2131e0be4b119be094b2545836e1aa94aa73fd2b9feb1a77df3924acd05b5233/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2131e0be4b119be094b2545836e1aa94aa73fd2b9feb1a77df3924acd05b5233/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2131e0be4b119be094b2545836e1aa94aa73fd2b9feb1a77df3924acd05b5233/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2131e0be4b119be094b2545836e1aa94aa73fd2b9feb1a77df3924acd05b5233/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:36:23 np0005592157 podman[297950]: 2026-01-22 14:36:23.809322628 +0000 UTC m=+0.169739047 container init 97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:36:23 np0005592157 podman[297950]: 2026-01-22 14:36:23.827465839 +0000 UTC m=+0.187882248 container start 97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:36:23 np0005592157 podman[297950]: 2026-01-22 14:36:23.836613426 +0000 UTC m=+0.197029835 container attach 97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:36:24 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:24 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:24.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:24 np0005592157 bold_colden[297966]: {
Jan 22 09:36:24 np0005592157 bold_colden[297966]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:36:24 np0005592157 bold_colden[297966]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:36:24 np0005592157 bold_colden[297966]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:36:24 np0005592157 bold_colden[297966]:        "osd_id": 0,
Jan 22 09:36:24 np0005592157 bold_colden[297966]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:36:24 np0005592157 bold_colden[297966]:        "type": "bluestore"
Jan 22 09:36:24 np0005592157 bold_colden[297966]:    }
Jan 22 09:36:24 np0005592157 bold_colden[297966]: }
Jan 22 09:36:24 np0005592157 systemd[1]: libpod-97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9.scope: Deactivated successfully.
Jan 22 09:36:24 np0005592157 podman[297950]: 2026-01-22 14:36:24.713131491 +0000 UTC m=+1.073547940 container died 97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:36:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2131e0be4b119be094b2545836e1aa94aa73fd2b9feb1a77df3924acd05b5233-merged.mount: Deactivated successfully.
Jan 22 09:36:24 np0005592157 podman[297950]: 2026-01-22 14:36:24.786387651 +0000 UTC m=+1.146804060 container remove 97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:36:24 np0005592157 systemd[1]: libpod-conmon-97e3bd1577e7580f08595d1ebd84ce506eb5d3b0d847d392434642a7cedc4eb9.scope: Deactivated successfully.
Jan 22 09:36:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:36:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:36:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 50a4b6eb-9389-4136-9a19-e1926a9e8b43 does not exist
Jan 22 09:36:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7e587acf-a5f2-4a67-9eba-191c77c0c349 does not exist
Jan 22 09:36:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 567bdfb6-7d4a-424f-9fe5-04f6057553e5 does not exist
Jan 22 09:36:25 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:25.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:26 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:27 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 3578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:27.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:28 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:28 np0005592157 ceph-mon[74359]: Health check update: 21 slow ops, oldest one blocked for 3578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:30.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:30 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:31 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:31 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:32.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:32 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:32.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 3583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:33 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:34.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:34 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 3583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:34 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:34.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:35 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:36.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:36.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:36 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:37 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:37 np0005592157 podman[298067]: 2026-01-22 14:36:37.390090748 +0000 UTC m=+0.115650584 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:36:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:38.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:38.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:38 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:39 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:40.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:40.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:40 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:41 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:36:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:36:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:42.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 3593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:42 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:43 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 3593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:43 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:44.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:44.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:44 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:44 np0005592157 podman[298142]: 2026-01-22 14:36:44.388237365 +0000 UTC m=+0.115513600 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 09:36:45 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:46.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:46 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:36:47 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:36:47
Jan 22 09:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'images', 'vms']
Jan 22 09:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:36:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 3598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:47.612 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:36:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:47.613 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:36:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:47.613 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:36:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:48.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:48.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:48 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 3598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:48.699 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:36:48 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:48.700 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:36:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:50.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:50 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:36:50.704 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:36:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:52.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 3602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:53 np0005592157 ceph-mon[74359]: Health check update: 0 slow ops, oldest one blocked for 3602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:54.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:54.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:56.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 3607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:58.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:36:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:58.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: Health check update: 0 slow ops, oldest one blocked for 3607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.349378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618349536, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 889, "num_deletes": 251, "total_data_size": 1105056, "memory_usage": 1134816, "flush_reason": "Manual Compaction"}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618361575, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1077454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56794, "largest_seqno": 57682, "table_properties": {"data_size": 1073224, "index_size": 1818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10654, "raw_average_key_size": 20, "raw_value_size": 1064219, "raw_average_value_size": 2027, "num_data_blocks": 79, "num_entries": 525, "num_filter_entries": 525, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092563, "oldest_key_time": 1769092563, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 12251 microseconds, and 6108 cpu microseconds.
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.361691) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1077454 bytes OK
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.361730) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.363880) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.363946) EVENT_LOG_v1 {"time_micros": 1769092618363908, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.363971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1100706, prev total WAL file size 1100706, number of live WAL files 2.
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.365001) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1052KB)], [125(10MB)]
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618365128, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11955388, "oldest_snapshot_seqno": -1}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 10546 keys, 10380147 bytes, temperature: kUnknown
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618450885, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 10380147, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10321029, "index_size": 31677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26373, "raw_key_size": 286272, "raw_average_key_size": 27, "raw_value_size": 10140174, "raw_average_value_size": 961, "num_data_blocks": 1185, "num_entries": 10546, "num_filter_entries": 10546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.451664) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 10380147 bytes
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.453583) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.7 rd, 120.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(20.7) write-amplify(9.6) OK, records in: 11061, records dropped: 515 output_compression: NoCompression
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.453622) EVENT_LOG_v1 {"time_micros": 1769092618453604, "job": 76, "event": "compaction_finished", "compaction_time_micros": 86166, "compaction_time_cpu_micros": 50421, "output_level": 6, "num_output_files": 1, "total_output_size": 10380147, "num_input_records": 11061, "num_output_records": 10546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618454357, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618458990, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.364776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.459042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.459049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.459052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.459055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:36:58.459058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:37:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:00.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 09:37:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:02.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:02.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 09:37:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 3612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:03 np0005592157 ceph-mon[74359]: Health check update: 0 slow ops, oldest one blocked for 3612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:04.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:37:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2928284361622929 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:37:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:37:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:06.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:06.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 09:37:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 3618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:08.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:08.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 09:37:08 np0005592157 systemd[1]: Starting dnf makecache...
Jan 22 09:37:08 np0005592157 podman[298233]: 2026-01-22 14:37:08.381726066 +0000 UTC m=+0.095926734 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:37:08 np0005592157 ceph-mon[74359]: Health check update: 0 slow ops, oldest one blocked for 3618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:08 np0005592157 dnf[298234]: Metadata cache refreshed recently.
Jan 22 09:37:08 np0005592157 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 09:37:08 np0005592157 systemd[1]: Finished dnf makecache.
Jan 22 09:37:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:10.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 09:37:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:37:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:12.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:37:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:12.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Jan 22 09:37:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 3622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:12 np0005592157 ceph-mon[74359]: Health check update: 0 slow ops, oldest one blocked for 3622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:14.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Jan 22 09:37:15 np0005592157 podman[298254]: 2026-01-22 14:37:15.393230834 +0000 UTC m=+0.125831577 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:37:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:16.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:16.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 22 09:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:37:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 3627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:18.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:18.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:18 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:18 np0005592157 ceph-mon[74359]: Health check update: 0 slow ops, oldest one blocked for 3627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:19 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:20.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:20.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:20 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:21 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:22.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:22.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:22 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:23 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:23 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:24.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:24.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:24 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:25 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:26.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:26.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 011666ad-40f1-47bb-b0cc-f0d9c17d7915 does not exist
Jan 22 09:37:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e21290fc-03c4-4794-92e7-d6b08440d31a does not exist
Jan 22 09:37:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 24440d74-e344-4de0-b5c9-98fd23d63da3 does not exist
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:37:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:37:27 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:37:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:37:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:27 np0005592157 podman[298608]: 2026-01-22 14:37:27.699851123 +0000 UTC m=+0.070353358 container create 4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gates, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 22 09:37:27 np0005592157 systemd[1]: Started libpod-conmon-4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0.scope.
Jan 22 09:37:27 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:37:27.751 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:37:27 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:37:27.752 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:37:27 np0005592157 podman[298608]: 2026-01-22 14:37:27.666832443 +0000 UTC m=+0.037334718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:37:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:37:27 np0005592157 podman[298608]: 2026-01-22 14:37:27.810086081 +0000 UTC m=+0.180588356 container init 4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:37:27 np0005592157 podman[298608]: 2026-01-22 14:37:27.822349476 +0000 UTC m=+0.192851681 container start 4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gates, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:37:27 np0005592157 podman[298608]: 2026-01-22 14:37:27.825879653 +0000 UTC m=+0.196381908 container attach 4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gates, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:37:27 np0005592157 thirsty_gates[298625]: 167 167
Jan 22 09:37:27 np0005592157 systemd[1]: libpod-4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0.scope: Deactivated successfully.
Jan 22 09:37:27 np0005592157 podman[298608]: 2026-01-22 14:37:27.834178659 +0000 UTC m=+0.204680864 container died 4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:37:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9e3ffe105ac84a70d2622534c84de4e299841280b9a084666966cc5f4f1e7591-merged.mount: Deactivated successfully.
Jan 22 09:37:27 np0005592157 podman[298608]: 2026-01-22 14:37:27.886799207 +0000 UTC m=+0.257301442 container remove 4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:37:27 np0005592157 systemd[1]: libpod-conmon-4058de1f751b53d5542659dedb7cd804c43ddc7297941726c3e698ba32dc09c0.scope: Deactivated successfully.
Jan 22 09:37:28 np0005592157 podman[298649]: 2026-01-22 14:37:28.112078183 +0000 UTC m=+0.069727743 container create e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:37:28 np0005592157 systemd[1]: Started libpod-conmon-e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed.scope.
Jan 22 09:37:28 np0005592157 podman[298649]: 2026-01-22 14:37:28.083495713 +0000 UTC m=+0.041145303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:37:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:28.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:37:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8ba827bbfb3248da8d40eebe2e6930bc29edd5b032209dfbab7963d2346f30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8ba827bbfb3248da8d40eebe2e6930bc29edd5b032209dfbab7963d2346f30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8ba827bbfb3248da8d40eebe2e6930bc29edd5b032209dfbab7963d2346f30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8ba827bbfb3248da8d40eebe2e6930bc29edd5b032209dfbab7963d2346f30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8ba827bbfb3248da8d40eebe2e6930bc29edd5b032209dfbab7963d2346f30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:28 np0005592157 podman[298649]: 2026-01-22 14:37:28.237760005 +0000 UTC m=+0.195409555 container init e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:37:28 np0005592157 podman[298649]: 2026-01-22 14:37:28.257407623 +0000 UTC m=+0.215057173 container start e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:37:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:28 np0005592157 podman[298649]: 2026-01-22 14:37:28.261978307 +0000 UTC m=+0.219627857 container attach e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:37:28 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:28 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:29 np0005592157 romantic_mayer[298665]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:37:29 np0005592157 romantic_mayer[298665]: --> relative data size: 1.0
Jan 22 09:37:29 np0005592157 romantic_mayer[298665]: --> All data devices are unavailable
Jan 22 09:37:29 np0005592157 systemd[1]: libpod-e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed.scope: Deactivated successfully.
Jan 22 09:37:29 np0005592157 podman[298649]: 2026-01-22 14:37:29.144684745 +0000 UTC m=+1.102334335 container died e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:37:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4d8ba827bbfb3248da8d40eebe2e6930bc29edd5b032209dfbab7963d2346f30-merged.mount: Deactivated successfully.
Jan 22 09:37:29 np0005592157 podman[298649]: 2026-01-22 14:37:29.225137754 +0000 UTC m=+1.182787304 container remove e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mayer, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 09:37:29 np0005592157 systemd[1]: libpod-conmon-e2e95eb21ca1e30adebfa0374f34345ca3642b0014321cf39b5c1d51b262bbed.scope: Deactivated successfully.
Jan 22 09:37:29 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:30.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:30.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:30 np0005592157 podman[298839]: 2026-01-22 14:37:30.380801703 +0000 UTC m=+0.043001520 container create 57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:37:30 np0005592157 systemd[1]: Started libpod-conmon-57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0.scope.
Jan 22 09:37:30 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:37:30 np0005592157 podman[298839]: 2026-01-22 14:37:30.452523474 +0000 UTC m=+0.114723291 container init 57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:37:30 np0005592157 podman[298839]: 2026-01-22 14:37:30.458466322 +0000 UTC m=+0.120666139 container start 57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:37:30 np0005592157 podman[298839]: 2026-01-22 14:37:30.363562424 +0000 UTC m=+0.025762261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:37:30 np0005592157 podman[298839]: 2026-01-22 14:37:30.461833636 +0000 UTC m=+0.124033453 container attach 57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:37:30 np0005592157 competent_diffie[298855]: 167 167
Jan 22 09:37:30 np0005592157 systemd[1]: libpod-57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0.scope: Deactivated successfully.
Jan 22 09:37:30 np0005592157 podman[298839]: 2026-01-22 14:37:30.463704162 +0000 UTC m=+0.125903979 container died 57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 09:37:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-738f985a1fdfb61ac40a9226c1b359bd3c688f123845b672b4dc69e8d1ad5444-merged.mount: Deactivated successfully.
Jan 22 09:37:30 np0005592157 podman[298839]: 2026-01-22 14:37:30.499221284 +0000 UTC m=+0.161421101 container remove 57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:37:30 np0005592157 systemd[1]: libpod-conmon-57aa4d04cb468361991fd8b6f8508c4d799309a2dbbbcaec1eb53743c34ecde0.scope: Deactivated successfully.
Jan 22 09:37:30 np0005592157 podman[298879]: 2026-01-22 14:37:30.687492612 +0000 UTC m=+0.054882655 container create db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wescoff, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:37:30 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592157 systemd[1]: Started libpod-conmon-db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7.scope.
Jan 22 09:37:30 np0005592157 podman[298879]: 2026-01-22 14:37:30.658811269 +0000 UTC m=+0.026201352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:37:30 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:37:30 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d71dfb63bf6d7a7b35bfb52b600279c09c5423c24fc9fca63707facc532a0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:30 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d71dfb63bf6d7a7b35bfb52b600279c09c5423c24fc9fca63707facc532a0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:30 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d71dfb63bf6d7a7b35bfb52b600279c09c5423c24fc9fca63707facc532a0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:30 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d71dfb63bf6d7a7b35bfb52b600279c09c5423c24fc9fca63707facc532a0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:30 np0005592157 podman[298879]: 2026-01-22 14:37:30.794847718 +0000 UTC m=+0.162237731 container init db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:37:30 np0005592157 podman[298879]: 2026-01-22 14:37:30.808889357 +0000 UTC m=+0.176279400 container start db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:37:30 np0005592157 podman[298879]: 2026-01-22 14:37:30.813702547 +0000 UTC m=+0.181092570 container attach db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]: {
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:    "0": [
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:        {
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "devices": [
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "/dev/loop3"
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            ],
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "lv_name": "ceph_lv0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "lv_size": "7511998464",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "name": "ceph_lv0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "tags": {
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.cluster_name": "ceph",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.crush_device_class": "",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.encrypted": "0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.osd_id": "0",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.type": "block",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:                "ceph.vdo": "0"
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            },
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "type": "block",
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:            "vg_name": "ceph_vg0"
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:        }
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]:    ]
Jan 22 09:37:31 np0005592157 festive_wescoff[298896]: }
Jan 22 09:37:31 np0005592157 systemd[1]: libpod-db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7.scope: Deactivated successfully.
Jan 22 09:37:31 np0005592157 podman[298879]: 2026-01-22 14:37:31.618208031 +0000 UTC m=+0.985598034 container died db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:37:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d3d71dfb63bf6d7a7b35bfb52b600279c09c5423c24fc9fca63707facc532a0f-merged.mount: Deactivated successfully.
Jan 22 09:37:31 np0005592157 podman[298879]: 2026-01-22 14:37:31.704767202 +0000 UTC m=+1.072157205 container remove db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:37:31 np0005592157 systemd[1]: libpod-conmon-db85b58758bd869b2e02d89efdbc0faef576de5c2ea1a029bdc2128c70a1dbd7.scope: Deactivated successfully.
Jan 22 09:37:31 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:32.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:32.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:32 np0005592157 podman[299059]: 2026-01-22 14:37:32.440484168 +0000 UTC m=+0.070721938 container create 3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sammet, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:37:32 np0005592157 systemd[1]: Started libpod-conmon-3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1.scope.
Jan 22 09:37:32 np0005592157 podman[299059]: 2026-01-22 14:37:32.408981616 +0000 UTC m=+0.039219456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:37:32 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:37:32 np0005592157 podman[299059]: 2026-01-22 14:37:32.545164949 +0000 UTC m=+0.175402789 container init 3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sammet, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:37:32 np0005592157 podman[299059]: 2026-01-22 14:37:32.558242964 +0000 UTC m=+0.188480694 container start 3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:37:32 np0005592157 podman[299059]: 2026-01-22 14:37:32.562224733 +0000 UTC m=+0.192462473 container attach 3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 09:37:32 np0005592157 fervent_sammet[299076]: 167 167
Jan 22 09:37:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:32 np0005592157 systemd[1]: libpod-3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1.scope: Deactivated successfully.
Jan 22 09:37:32 np0005592157 podman[299059]: 2026-01-22 14:37:32.56737121 +0000 UTC m=+0.197608950 container died 3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sammet, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 09:37:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c634264022cf1fabaa6b2d1e0f9e850147c0e53c759085de6ff8bc0fa5c110ff-merged.mount: Deactivated successfully.
Jan 22 09:37:32 np0005592157 podman[299059]: 2026-01-22 14:37:32.620714426 +0000 UTC m=+0.250952196 container remove 3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:37:32 np0005592157 systemd[1]: libpod-conmon-3bbf264a4f09530c378fd43f50c4d0b8116df116e7623a44523e4aeeae1698b1.scope: Deactivated successfully.
Jan 22 09:37:32 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:32 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:32 np0005592157 podman[299100]: 2026-01-22 14:37:32.864603644 +0000 UTC m=+0.059601541 container create 1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_neumann, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:37:32 np0005592157 systemd[1]: Started libpod-conmon-1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f.scope.
Jan 22 09:37:32 np0005592157 podman[299100]: 2026-01-22 14:37:32.835374948 +0000 UTC m=+0.030372905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:37:32 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:37:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0dcdaa48ca91ea9f2e12eeb3a775e1dc907994a6ad3c135f200716b6147f62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0dcdaa48ca91ea9f2e12eeb3a775e1dc907994a6ad3c135f200716b6147f62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0dcdaa48ca91ea9f2e12eeb3a775e1dc907994a6ad3c135f200716b6147f62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0dcdaa48ca91ea9f2e12eeb3a775e1dc907994a6ad3c135f200716b6147f62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:37:32 np0005592157 podman[299100]: 2026-01-22 14:37:32.96789702 +0000 UTC m=+0.162894937 container init 1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_neumann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:37:32 np0005592157 podman[299100]: 2026-01-22 14:37:32.979303614 +0000 UTC m=+0.174301511 container start 1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_neumann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:37:32 np0005592157 podman[299100]: 2026-01-22 14:37:32.983916868 +0000 UTC m=+0.178914825 container attach 1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:37:33 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]: {
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]:        "osd_id": 0,
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]:        "type": "bluestore"
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]:    }
Jan 22 09:37:33 np0005592157 exciting_neumann[299117]: }
Jan 22 09:37:33 np0005592157 systemd[1]: libpod-1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f.scope: Deactivated successfully.
Jan 22 09:37:33 np0005592157 podman[299139]: 2026-01-22 14:37:33.983323846 +0000 UTC m=+0.036139159 container died 1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:37:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-af0dcdaa48ca91ea9f2e12eeb3a775e1dc907994a6ad3c135f200716b6147f62-merged.mount: Deactivated successfully.
Jan 22 09:37:34 np0005592157 podman[299139]: 2026-01-22 14:37:34.038373523 +0000 UTC m=+0.091188606 container remove 1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:37:34 np0005592157 systemd[1]: libpod-conmon-1d600c0b2b19bbf813755baa28c5b2897d1b999a99ebed3ec7670ec1b66fb17f.scope: Deactivated successfully.
Jan 22 09:37:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:37:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:37:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 467215f6-bec7-4311-bbbc-180050dbb806 does not exist
Jan 22 09:37:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5b01aa4f-a368-45c7-bcde-0f8bf3bc3b73 does not exist
Jan 22 09:37:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ff4c4c73-7b72-4f7a-a0a9-17b94fe54a43 does not exist
Jan 22 09:37:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:34.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:34.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:35 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:36 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:36.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:36.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:36 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:37:36.756 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:37:37 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:38 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:38 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:38.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:38.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:39 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:39 np0005592157 podman[299206]: 2026-01-22 14:37:39.375459794 +0000 UTC m=+0.097965334 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:37:40 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:40.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:41 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:42.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:37:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:42.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:37:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:42 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:43 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:43 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:44.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:44.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:44 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:45 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000075s ======
Jan 22 09:37:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:46.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 22 09:37:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:46.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:46 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:46 np0005592157 podman[299279]: 2026-01-22 14:37:46.427343166 +0000 UTC m=+0.151790672 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 09:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:37:47
Jan 22 09:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms']
Jan 22 09:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:37:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:37:47.614 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:37:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:37:47.614 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:37:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:37:47.614 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:37:47 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:48.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:48 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:48 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:48 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:49 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:50.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:50.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:50 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:51 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:52.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:52 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:52 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:53 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:54.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:54.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:54 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:55 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:56.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:56.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:56 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:58 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:58 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:58.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:37:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:37:59 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:00 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:00.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:00.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:01 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:02 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:02.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:03 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:03 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:38:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:04.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:38:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:04.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:38:04 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2928284361622929 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:38:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:38:05 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:06.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:06.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:06 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:07 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:08.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:08 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:08 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:09 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:10 np0005592157 podman[299368]: 2026-01-22 14:38:10.112366577 +0000 UTC m=+0.080789028 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:38:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:10.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:10 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:10 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:11 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:12.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:12 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:12 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:13 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:14.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:14.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:14 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:15 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:38:16 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:17 np0005592157 podman[299390]: 2026-01-22 14:38:17.360281738 +0000 UTC m=+0.093262658 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:38:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:17 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:17 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:18 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:19 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:20.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:20.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:20 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:21 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:38:21.327 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:38:21 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:38:21.328 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:38:21 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:22.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:22.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:22 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:22 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:23 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:24 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:25 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:26.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:26.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:26 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:27 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:27 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:28.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:28 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:30.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:30.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:30 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:31 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:38:31.331 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:38:31 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:32.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:32 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:33 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:33 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:34.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:34 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 09:38:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:38:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:38:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:36 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:38.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:38 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:38 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:39 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:40 np0005592157 podman[299607]: 2026-01-22 14:38:40.365057866 +0000 UTC m=+0.091498634 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:38:40 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2319ff46-4d6a-4284-98c9-c8ad73ca61c5 does not exist
Jan 22 09:38:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bc364924-fa01-4ef4-838d-f79d6c371d0f does not exist
Jan 22 09:38:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5b729b37-7d1b-4903-a039-848364fbbc76 does not exist
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:38:41 np0005592157 podman[299763]: 2026-01-22 14:38:41.589239337 +0000 UTC m=+0.039559824 container create a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:38:41 np0005592157 systemd[1]: Started libpod-conmon-a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148.scope.
Jan 22 09:38:41 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:38:41 np0005592157 podman[299763]: 2026-01-22 14:38:41.662295552 +0000 UTC m=+0.112616059 container init a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:38:41 np0005592157 podman[299763]: 2026-01-22 14:38:41.668053605 +0000 UTC m=+0.118374092 container start a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 22 09:38:41 np0005592157 podman[299763]: 2026-01-22 14:38:41.572092821 +0000 UTC m=+0.022413338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:41 np0005592157 podman[299763]: 2026-01-22 14:38:41.671528841 +0000 UTC m=+0.121849318 container attach a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:38:41 np0005592157 festive_williamson[299780]: 167 167
Jan 22 09:38:41 np0005592157 systemd[1]: libpod-a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148.scope: Deactivated successfully.
Jan 22 09:38:41 np0005592157 podman[299763]: 2026-01-22 14:38:41.67510111 +0000 UTC m=+0.125421597 container died a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:38:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7d87daa437dc527ad2140e24d8d16466044c7e9eb8071f93dfaed77f1ba0284c-merged.mount: Deactivated successfully.
Jan 22 09:38:41 np0005592157 podman[299763]: 2026-01-22 14:38:41.712427237 +0000 UTC m=+0.162747724 container remove a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:38:41 np0005592157 systemd[1]: libpod-conmon-a4822684aa43ac82c5fc8551838773ffd96dd1383f3d130b23f0a1b5875c8148.scope: Deactivated successfully.
Jan 22 09:38:41 np0005592157 podman[299806]: 2026-01-22 14:38:41.891536017 +0000 UTC m=+0.045614124 container create f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:38:41 np0005592157 systemd[1]: Started libpod-conmon-f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693.scope.
Jan 22 09:38:41 np0005592157 podman[299806]: 2026-01-22 14:38:41.868928355 +0000 UTC m=+0.023006542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:41 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:38:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5be0283c58a10195cee34a4eba244b391d9e7330446909efbcc8d818dfb7ff0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5be0283c58a10195cee34a4eba244b391d9e7330446909efbcc8d818dfb7ff0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5be0283c58a10195cee34a4eba244b391d9e7330446909efbcc8d818dfb7ff0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5be0283c58a10195cee34a4eba244b391d9e7330446909efbcc8d818dfb7ff0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:41 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5be0283c58a10195cee34a4eba244b391d9e7330446909efbcc8d818dfb7ff0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:41 np0005592157 podman[299806]: 2026-01-22 14:38:41.997300044 +0000 UTC m=+0.151378151 container init f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:38:42 np0005592157 podman[299806]: 2026-01-22 14:38:42.009903777 +0000 UTC m=+0.163981864 container start f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:38:42 np0005592157 podman[299806]: 2026-01-22 14:38:42.013053906 +0000 UTC m=+0.167132023 container attach f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:38:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:42.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:42 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:42 np0005592157 tender_wright[299823]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:38:42 np0005592157 tender_wright[299823]: --> relative data size: 1.0
Jan 22 09:38:42 np0005592157 tender_wright[299823]: --> All data devices are unavailable
Jan 22 09:38:42 np0005592157 systemd[1]: libpod-f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693.scope: Deactivated successfully.
Jan 22 09:38:42 np0005592157 podman[299806]: 2026-01-22 14:38:42.796746394 +0000 UTC m=+0.950824491 container died f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:38:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a5be0283c58a10195cee34a4eba244b391d9e7330446909efbcc8d818dfb7ff0-merged.mount: Deactivated successfully.
Jan 22 09:38:42 np0005592157 podman[299806]: 2026-01-22 14:38:42.857251657 +0000 UTC m=+1.011329754 container remove f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:38:42 np0005592157 systemd[1]: libpod-conmon-f836098003633ba50db65d401a0d3d0dd5a9c9bfc57b1738179b0a1ac8bb0693.scope: Deactivated successfully.
Jan 22 09:38:43 np0005592157 podman[299991]: 2026-01-22 14:38:43.534649514 +0000 UTC m=+0.065728084 container create 4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:38:43 np0005592157 systemd[1]: Started libpod-conmon-4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b.scope.
Jan 22 09:38:43 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:43 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:43 np0005592157 podman[299991]: 2026-01-22 14:38:43.506000182 +0000 UTC m=+0.037078812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:43 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:38:43 np0005592157 podman[299991]: 2026-01-22 14:38:43.631734216 +0000 UTC m=+0.162812776 container init 4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:38:43 np0005592157 podman[299991]: 2026-01-22 14:38:43.639484109 +0000 UTC m=+0.170562689 container start 4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 09:38:43 np0005592157 podman[299991]: 2026-01-22 14:38:43.643716424 +0000 UTC m=+0.174795024 container attach 4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:38:43 np0005592157 angry_yonath[300009]: 167 167
Jan 22 09:38:43 np0005592157 systemd[1]: libpod-4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b.scope: Deactivated successfully.
Jan 22 09:38:43 np0005592157 podman[300014]: 2026-01-22 14:38:43.681594085 +0000 UTC m=+0.025337611 container died 4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:38:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cac6aaa6de7ce91fb5b127e507c4fc54be0f69d6afca8e13a6bdb00a7d45b039-merged.mount: Deactivated successfully.
Jan 22 09:38:43 np0005592157 podman[300014]: 2026-01-22 14:38:43.717020795 +0000 UTC m=+0.060764301 container remove 4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:38:43 np0005592157 systemd[1]: libpod-conmon-4417f2b024640400f85e32cef4d738d023d089e9a1fd647874962483739e146b.scope: Deactivated successfully.
Jan 22 09:38:43 np0005592157 podman[300036]: 2026-01-22 14:38:43.906073611 +0000 UTC m=+0.062496973 container create 5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:38:43 np0005592157 systemd[1]: Started libpod-conmon-5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e.scope.
Jan 22 09:38:43 np0005592157 podman[300036]: 2026-01-22 14:38:43.86817648 +0000 UTC m=+0.024599892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:43 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b093f577d8855ef2835f29c8607aa9f36a41530cc1824f0363f8871693653f6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b093f577d8855ef2835f29c8607aa9f36a41530cc1824f0363f8871693653f6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b093f577d8855ef2835f29c8607aa9f36a41530cc1824f0363f8871693653f6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b093f577d8855ef2835f29c8607aa9f36a41530cc1824f0363f8871693653f6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:44 np0005592157 podman[300036]: 2026-01-22 14:38:44.003151523 +0000 UTC m=+0.159574895 container init 5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:38:44 np0005592157 podman[300036]: 2026-01-22 14:38:44.016076134 +0000 UTC m=+0.172499496 container start 5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:38:44 np0005592157 podman[300036]: 2026-01-22 14:38:44.020441582 +0000 UTC m=+0.176864914 container attach 5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:38:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:44 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]: {
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:    "0": [
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:        {
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "devices": [
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "/dev/loop3"
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            ],
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "lv_name": "ceph_lv0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "lv_size": "7511998464",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "name": "ceph_lv0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "tags": {
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.cluster_name": "ceph",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.crush_device_class": "",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.encrypted": "0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.osd_id": "0",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.type": "block",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:                "ceph.vdo": "0"
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            },
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "type": "block",
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:            "vg_name": "ceph_vg0"
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:        }
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]:    ]
Jan 22 09:38:44 np0005592157 vigorous_heisenberg[300053]: }
Jan 22 09:38:44 np0005592157 systemd[1]: libpod-5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e.scope: Deactivated successfully.
Jan 22 09:38:44 np0005592157 podman[300036]: 2026-01-22 14:38:44.782680978 +0000 UTC m=+0.939104310 container died 5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:38:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b093f577d8855ef2835f29c8607aa9f36a41530cc1824f0363f8871693653f6c-merged.mount: Deactivated successfully.
Jan 22 09:38:44 np0005592157 podman[300036]: 2026-01-22 14:38:44.845115879 +0000 UTC m=+1.001539201 container remove 5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:38:44 np0005592157 systemd[1]: libpod-conmon-5d625bed924c19c19e638064c60f9d5eef02bfe4a6111460d249feb2524a423e.scope: Deactivated successfully.
Jan 22 09:38:45 np0005592157 podman[300262]: 2026-01-22 14:38:45.513537544 +0000 UTC m=+0.039619345 container create 091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:38:45 np0005592157 systemd[1]: Started libpod-conmon-091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea.scope.
Jan 22 09:38:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:38:45 np0005592157 podman[300262]: 2026-01-22 14:38:45.495801513 +0000 UTC m=+0.021883324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:45 np0005592157 podman[300262]: 2026-01-22 14:38:45.647349848 +0000 UTC m=+0.173431659 container init 091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:38:45 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:45 np0005592157 podman[300262]: 2026-01-22 14:38:45.661357296 +0000 UTC m=+0.187439087 container start 091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:38:45 np0005592157 podman[300262]: 2026-01-22 14:38:45.665116909 +0000 UTC m=+0.191198740 container attach 091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:38:45 np0005592157 beautiful_cohen[300279]: 167 167
Jan 22 09:38:45 np0005592157 systemd[1]: libpod-091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea.scope: Deactivated successfully.
Jan 22 09:38:45 np0005592157 podman[300262]: 2026-01-22 14:38:45.6671779 +0000 UTC m=+0.193259691 container died 091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:38:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-035cb3420998670b5b82445184277322782980e4b4ca56f9cb45a124867db236-merged.mount: Deactivated successfully.
Jan 22 09:38:45 np0005592157 podman[300262]: 2026-01-22 14:38:45.714819544 +0000 UTC m=+0.240901355 container remove 091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:38:45 np0005592157 systemd[1]: libpod-conmon-091f1dbb776806c4ea81f9edd4ded482ae17d997b119145632610427961a34ea.scope: Deactivated successfully.
Jan 22 09:38:45 np0005592157 podman[300303]: 2026-01-22 14:38:45.889130024 +0000 UTC m=+0.050354602 container create 829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ardinghelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:38:45 np0005592157 systemd[1]: Started libpod-conmon-829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d.scope.
Jan 22 09:38:45 np0005592157 podman[300303]: 2026-01-22 14:38:45.860962475 +0000 UTC m=+0.022187233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4669cc59b5089b2abae1b885d64c2d7825d5db84a934957446a5a9f74ead0063/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4669cc59b5089b2abae1b885d64c2d7825d5db84a934957446a5a9f74ead0063/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4669cc59b5089b2abae1b885d64c2d7825d5db84a934957446a5a9f74ead0063/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4669cc59b5089b2abae1b885d64c2d7825d5db84a934957446a5a9f74ead0063/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:45 np0005592157 podman[300303]: 2026-01-22 14:38:45.986005401 +0000 UTC m=+0.147230029 container init 829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:38:45 np0005592157 podman[300303]: 2026-01-22 14:38:45.998966283 +0000 UTC m=+0.160190891 container start 829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:38:46 np0005592157 podman[300303]: 2026-01-22 14:38:46.005333831 +0000 UTC m=+0.166558419 container attach 829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:38:46 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]: {
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]:        "osd_id": 0,
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]:        "type": "bluestore"
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]:    }
Jan 22 09:38:46 np0005592157 admiring_ardinghelli[300319]: }
Jan 22 09:38:46 np0005592157 systemd[1]: libpod-829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d.scope: Deactivated successfully.
Jan 22 09:38:46 np0005592157 podman[300303]: 2026-01-22 14:38:46.826489349 +0000 UTC m=+0.987713947 container died 829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:38:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4669cc59b5089b2abae1b885d64c2d7825d5db84a934957446a5a9f74ead0063-merged.mount: Deactivated successfully.
Jan 22 09:38:46 np0005592157 podman[300303]: 2026-01-22 14:38:46.881994638 +0000 UTC m=+1.043219216 container remove 829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ardinghelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:38:46 np0005592157 systemd[1]: libpod-conmon-829477c6cfc144e3adf749fd31fb57057bf109e1338ef216899b9b1cad2d884d.scope: Deactivated successfully.
Jan 22 09:38:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:38:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:38:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3429fc7a-a271-413a-a80c-861d2050fd4c does not exist
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bb633bd2-dbc3-4bab-a1d1-64e487cc04ba does not exist
Jan 22 09:38:46 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev afc343ba-77d0-45f4-876a-d07548033167 does not exist
Jan 22 09:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:38:47
Jan 22 09:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Jan 22 09:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:38:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:38:47.614 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:38:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:38:47.616 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:38:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:38:47.616 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:38:47 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:47 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:48.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:48 np0005592157 podman[300402]: 2026-01-22 14:38:48.368777272 +0000 UTC m=+0.108345692 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 22 09:38:48 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:49 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:50.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:50 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:51 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:52 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:52 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:53 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:38:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:54.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:38:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:54 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:55 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:56.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:56.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:56 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:57 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:57 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:38:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:38:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:58.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:38:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:38:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:58.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:59 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:00 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:39:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:39:01 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:02 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:02.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:03 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:03 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:04 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2928284361622929 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:39:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:39:05 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:06 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:06.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:06.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:07 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:08 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:08 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:39:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:08.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:39:09 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:10 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:10.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:10.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:11 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:11 np0005592157 podman[300489]: 2026-01-22 14:39:11.343983347 +0000 UTC m=+0.083748171 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:39:12 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:12.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:12.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:13 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:13 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:14 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:15 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:16 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:39:17 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:18.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:18 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:18 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:19 np0005592157 podman[300512]: 2026-01-22 14:39:19.372869338 +0000 UTC m=+0.107514532 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 09:39:19 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:39:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:39:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:20.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:20 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:22.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:22 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:39:23.127 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:39:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:39:23.129 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:39:23 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:23 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:24.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:24.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:24 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:25 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:26.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:26 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:27 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:28.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:28 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:28 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:29 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:30.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:30 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:30 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:39:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155501559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:39:31 np0005592157 ceph-mon[74359]: 43 slow requests (by type [ 'delayed' : 43 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:39:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:39:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155501559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:39:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:32.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:32 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:32 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:33 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:39:33.131 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:39:33 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 739 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 22 09:39:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:34.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:39:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:39:35 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:36 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:39:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:36.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:37 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:38 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:38 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:39:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:38.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:38.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:39 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:40 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:39:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:40.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:40.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:41 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:42 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:39:42 np0005592157 podman[300606]: 2026-01-22 14:39:42.349283923 +0000 UTC m=+0.071907447 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:39:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:39:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:42.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:39:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:42.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:43 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:43 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:44 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:39:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:44.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:44.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.270350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785270438, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2189, "num_deletes": 251, "total_data_size": 3256642, "memory_usage": 3324144, "flush_reason": "Manual Compaction"}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785298167, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3171351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57683, "largest_seqno": 59871, "table_properties": {"data_size": 3162263, "index_size": 5325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22746, "raw_average_key_size": 21, "raw_value_size": 3142323, "raw_average_value_size": 2939, "num_data_blocks": 228, "num_entries": 1069, "num_filter_entries": 1069, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092619, "oldest_key_time": 1769092619, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 27917 microseconds, and 12671 cpu microseconds.
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.298264) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3171351 bytes OK
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.298294) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.300350) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.300370) EVENT_LOG_v1 {"time_micros": 1769092785300363, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.300395) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3247488, prev total WAL file size 3247488, number of live WAL files 2.
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301877) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3097KB)], [128(10136KB)]
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785301995, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 13551498, "oldest_snapshot_seqno": -1}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 11096 keys, 11910855 bytes, temperature: kUnknown
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785399612, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11910855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11847241, "index_size": 34765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 299340, "raw_average_key_size": 26, "raw_value_size": 11655651, "raw_average_value_size": 1050, "num_data_blocks": 1311, "num_entries": 11096, "num_filter_entries": 11096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.400074) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11910855 bytes
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.401895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.6 rd, 121.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11615, records dropped: 519 output_compression: NoCompression
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.401953) EVENT_LOG_v1 {"time_micros": 1769092785401911, "job": 78, "event": "compaction_finished", "compaction_time_micros": 97751, "compaction_time_cpu_micros": 53170, "output_level": 6, "num_output_files": 1, "total_output_size": 11910855, "num_input_records": 11615, "num_output_records": 11096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785403309, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785407255, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.407392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.407403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.407406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.407409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:39:45.407412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:46 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:46 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 22 09:39:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:46.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:39:47
Jan 22 09:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'vms', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Jan 22 09:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:39:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:39:47.615 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:39:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:39:47.615 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:39:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:39:47.615 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:39:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:39:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:48.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:39:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:39:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 21223496-1977-4bf9-b675-962b976b2a67 does not exist
Jan 22 09:39:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 662cede8-582b-414a-bfda-5321f1631ab0 does not exist
Jan 22 09:39:48 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0068d1ab-d347-45a1-a979-2b2195db7deb does not exist
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:39:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:39:49 np0005592157 podman[300949]: 2026-01-22 14:39:49.168798163 +0000 UTC m=+0.062160785 container create 82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 22 09:39:49 np0005592157 systemd[1]: Started libpod-conmon-82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2.scope.
Jan 22 09:39:49 np0005592157 podman[300949]: 2026-01-22 14:39:49.14612285 +0000 UTC m=+0.039485482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:39:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:39:49 np0005592157 podman[300949]: 2026-01-22 14:39:49.265972967 +0000 UTC m=+0.159335649 container init 82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:39:49 np0005592157 podman[300949]: 2026-01-22 14:39:49.277982726 +0000 UTC m=+0.171345348 container start 82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:39:49 np0005592157 podman[300949]: 2026-01-22 14:39:49.282514328 +0000 UTC m=+0.175877010 container attach 82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:39:49 np0005592157 thirsty_panini[300966]: 167 167
Jan 22 09:39:49 np0005592157 systemd[1]: libpod-82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2.scope: Deactivated successfully.
Jan 22 09:39:49 np0005592157 podman[300949]: 2026-01-22 14:39:49.287240856 +0000 UTC m=+0.180603448 container died 82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:39:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-932cd79df0a7a5f2132f48e7a68ed831b8a06aadf3620851c4b519df0abd8007-merged.mount: Deactivated successfully.
Jan 22 09:39:49 np0005592157 podman[300949]: 2026-01-22 14:39:49.343333869 +0000 UTC m=+0.236696491 container remove 82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:39:49 np0005592157 systemd[1]: libpod-conmon-82057f8e3d8d448508041b09dcf61706c8d0253e62d8b404b7aae66f2d8188b2.scope: Deactivated successfully.
Jan 22 09:39:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:39:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:39:49 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:49 np0005592157 podman[300988]: 2026-01-22 14:39:49.596079938 +0000 UTC m=+0.081114676 container create a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_clarke, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:39:49 np0005592157 podman[300988]: 2026-01-22 14:39:49.547595083 +0000 UTC m=+0.032629831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:39:49 np0005592157 systemd[1]: Started libpod-conmon-a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17.scope.
Jan 22 09:39:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:39:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbbcbb0e2aa8b687c4a17919f2623df3a9fba4e1a95b7951e95181fb777b932/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbbcbb0e2aa8b687c4a17919f2623df3a9fba4e1a95b7951e95181fb777b932/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbbcbb0e2aa8b687c4a17919f2623df3a9fba4e1a95b7951e95181fb777b932/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbbcbb0e2aa8b687c4a17919f2623df3a9fba4e1a95b7951e95181fb777b932/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbbcbb0e2aa8b687c4a17919f2623df3a9fba4e1a95b7951e95181fb777b932/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:49 np0005592157 podman[301003]: 2026-01-22 14:39:49.774784177 +0000 UTC m=+0.134618845 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:39:49 np0005592157 podman[300988]: 2026-01-22 14:39:49.790298273 +0000 UTC m=+0.275333061 container init a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:39:49 np0005592157 podman[300988]: 2026-01-22 14:39:49.806567757 +0000 UTC m=+0.291602495 container start a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_clarke, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:39:49 np0005592157 podman[300988]: 2026-01-22 14:39:49.810533995 +0000 UTC m=+0.295568773 container attach a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_clarke, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:39:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:50.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:50 np0005592157 zen_clarke[301021]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:39:50 np0005592157 zen_clarke[301021]: --> relative data size: 1.0
Jan 22 09:39:50 np0005592157 zen_clarke[301021]: --> All data devices are unavailable
Jan 22 09:39:50 np0005592157 systemd[1]: libpod-a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17.scope: Deactivated successfully.
Jan 22 09:39:50 np0005592157 podman[301045]: 2026-01-22 14:39:50.67859093 +0000 UTC m=+0.035079323 container died a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:39:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dfbbcbb0e2aa8b687c4a17919f2623df3a9fba4e1a95b7951e95181fb777b932-merged.mount: Deactivated successfully.
Jan 22 09:39:50 np0005592157 podman[301045]: 2026-01-22 14:39:50.750219099 +0000 UTC m=+0.106707452 container remove a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:39:50 np0005592157 systemd[1]: libpod-conmon-a4726a8c4aebefc7be1e19219bb7ee892709c13655fd14dd087f991189e6cf17.scope: Deactivated successfully.
Jan 22 09:39:51 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:51 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:51 np0005592157 podman[301201]: 2026-01-22 14:39:51.629686596 +0000 UTC m=+0.054265679 container create 5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:39:51 np0005592157 systemd[1]: Started libpod-conmon-5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4.scope.
Jan 22 09:39:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:39:51 np0005592157 podman[301201]: 2026-01-22 14:39:51.611293769 +0000 UTC m=+0.035872882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:39:51 np0005592157 podman[301201]: 2026-01-22 14:39:51.71236848 +0000 UTC m=+0.136947653 container init 5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 09:39:51 np0005592157 podman[301201]: 2026-01-22 14:39:51.71962628 +0000 UTC m=+0.144205383 container start 5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:39:51 np0005592157 podman[301201]: 2026-01-22 14:39:51.724382768 +0000 UTC m=+0.148961931 container attach 5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:39:51 np0005592157 sleepy_lichterman[301217]: 167 167
Jan 22 09:39:51 np0005592157 systemd[1]: libpod-5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4.scope: Deactivated successfully.
Jan 22 09:39:51 np0005592157 podman[301201]: 2026-01-22 14:39:51.727300371 +0000 UTC m=+0.151879484 container died 5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:39:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fd2e61dfb4910e9f20e03a3248081aed5f49dbb32dc8750c9672df4c266c3d2b-merged.mount: Deactivated successfully.
Jan 22 09:39:51 np0005592157 podman[301201]: 2026-01-22 14:39:51.779160769 +0000 UTC m=+0.203739872 container remove 5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:39:51 np0005592157 systemd[1]: libpod-conmon-5add85898291f53f42dd673508d867354fdbc48a1377f1807d7b22f21e271ab4.scope: Deactivated successfully.
Jan 22 09:39:51 np0005592157 podman[301241]: 2026-01-22 14:39:51.985085855 +0000 UTC m=+0.061662613 container create bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:39:52 np0005592157 systemd[1]: Started libpod-conmon-bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6.scope.
Jan 22 09:39:52 np0005592157 podman[301241]: 2026-01-22 14:39:51.956349241 +0000 UTC m=+0.032926069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:39:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:39:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2ea7a9680590bfe03953ed0f6998f01d46c1299889200995d41b8e0f1fd17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2ea7a9680590bfe03953ed0f6998f01d46c1299889200995d41b8e0f1fd17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2ea7a9680590bfe03953ed0f6998f01d46c1299889200995d41b8e0f1fd17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2ea7a9680590bfe03953ed0f6998f01d46c1299889200995d41b8e0f1fd17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:52 np0005592157 podman[301241]: 2026-01-22 14:39:52.114716895 +0000 UTC m=+0.191293723 container init bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:39:52 np0005592157 podman[301241]: 2026-01-22 14:39:52.121170075 +0000 UTC m=+0.197746843 container start bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:39:52 np0005592157 podman[301241]: 2026-01-22 14:39:52.125004561 +0000 UTC m=+0.201581399 container attach bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_golick, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:39:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:52.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:52 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]: {
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:    "0": [
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:        {
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "devices": [
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "/dev/loop3"
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            ],
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "lv_name": "ceph_lv0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "lv_size": "7511998464",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "name": "ceph_lv0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "tags": {
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.cluster_name": "ceph",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.crush_device_class": "",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.encrypted": "0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.osd_id": "0",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.type": "block",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:                "ceph.vdo": "0"
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            },
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "type": "block",
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:            "vg_name": "ceph_vg0"
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:        }
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]:    ]
Jan 22 09:39:52 np0005592157 hopeful_golick[301257]: }
Jan 22 09:39:52 np0005592157 systemd[1]: libpod-bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6.scope: Deactivated successfully.
Jan 22 09:39:52 np0005592157 podman[301241]: 2026-01-22 14:39:52.891254846 +0000 UTC m=+0.967831654 container died bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:39:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2ca2ea7a9680590bfe03953ed0f6998f01d46c1299889200995d41b8e0f1fd17-merged.mount: Deactivated successfully.
Jan 22 09:39:52 np0005592157 podman[301241]: 2026-01-22 14:39:52.973360585 +0000 UTC m=+1.049937333 container remove bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_golick, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:39:52 np0005592157 systemd[1]: libpod-conmon-bf56b48b55fa7492ee4ed76699f6f55e9cac77928ba9c19b999416f08b1932e6.scope: Deactivated successfully.
Jan 22 09:39:53 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:53 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:53 np0005592157 podman[301422]: 2026-01-22 14:39:53.711573594 +0000 UTC m=+0.053813358 container create 58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:39:53 np0005592157 systemd[1]: Started libpod-conmon-58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e.scope.
Jan 22 09:39:53 np0005592157 podman[301422]: 2026-01-22 14:39:53.681841325 +0000 UTC m=+0.024081149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:39:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:39:53 np0005592157 podman[301422]: 2026-01-22 14:39:53.804744319 +0000 UTC m=+0.146984133 container init 58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:39:53 np0005592157 podman[301422]: 2026-01-22 14:39:53.816995163 +0000 UTC m=+0.159234917 container start 58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:39:53 np0005592157 podman[301422]: 2026-01-22 14:39:53.821199267 +0000 UTC m=+0.163439021 container attach 58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:39:53 np0005592157 sharp_mendel[301438]: 167 167
Jan 22 09:39:53 np0005592157 systemd[1]: libpod-58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e.scope: Deactivated successfully.
Jan 22 09:39:53 np0005592157 podman[301422]: 2026-01-22 14:39:53.825224377 +0000 UTC m=+0.167464141 container died 58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:39:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e299216f07eb1d8af8bf1a17837def0e219962aa0ac068a8f11c667fe4e10f37-merged.mount: Deactivated successfully.
Jan 22 09:39:53 np0005592157 podman[301422]: 2026-01-22 14:39:53.874496591 +0000 UTC m=+0.216736355 container remove 58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 09:39:53 np0005592157 systemd[1]: libpod-conmon-58e064e1d70d1b508fd55c930799844f440cb1fe55e50f75c5a34c5a0d61ea8e.scope: Deactivated successfully.
Jan 22 09:39:54 np0005592157 podman[301462]: 2026-01-22 14:39:54.101368927 +0000 UTC m=+0.060715609 container create d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:39:54 np0005592157 systemd[1]: Started libpod-conmon-d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4.scope.
Jan 22 09:39:54 np0005592157 podman[301462]: 2026-01-22 14:39:54.08095957 +0000 UTC m=+0.040306292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:39:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:39:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58aeae85983dd960695e8f8daeaf11dfeb6fa72e691af162041caa22146d0b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58aeae85983dd960695e8f8daeaf11dfeb6fa72e691af162041caa22146d0b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58aeae85983dd960695e8f8daeaf11dfeb6fa72e691af162041caa22146d0b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58aeae85983dd960695e8f8daeaf11dfeb6fa72e691af162041caa22146d0b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:39:54 np0005592157 podman[301462]: 2026-01-22 14:39:54.202395327 +0000 UTC m=+0.161742029 container init d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 09:39:54 np0005592157 podman[301462]: 2026-01-22 14:39:54.215537774 +0000 UTC m=+0.174884476 container start d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:39:54 np0005592157 podman[301462]: 2026-01-22 14:39:54.219707297 +0000 UTC m=+0.179054019 container attach d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:39:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:39:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:39:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:54.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:54 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]: {
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]:        "osd_id": 0,
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]:        "type": "bluestore"
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]:    }
Jan 22 09:39:55 np0005592157 compassionate_morse[301478]: }
Jan 22 09:39:55 np0005592157 systemd[1]: libpod-d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4.scope: Deactivated successfully.
Jan 22 09:39:55 np0005592157 podman[301462]: 2026-01-22 14:39:55.157569945 +0000 UTC m=+1.116916657 container died d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:39:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e58aeae85983dd960695e8f8daeaf11dfeb6fa72e691af162041caa22146d0b8-merged.mount: Deactivated successfully.
Jan 22 09:39:55 np0005592157 podman[301462]: 2026-01-22 14:39:55.219217906 +0000 UTC m=+1.178564618 container remove d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:39:55 np0005592157 systemd[1]: libpod-conmon-d3a3f1720445be719d1823f9a6ebedbb50c1c44c3b7d5f8d2a53c37dec3317e4.scope: Deactivated successfully.
Jan 22 09:39:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:39:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:39:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 33eb582a-1391-4473-87b2-7c3c309a1b18 does not exist
Jan 22 09:39:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7285bfc9-771d-4718-b5b5-7f5af769adf7 does not exist
Jan 22 09:39:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1d84d5cc-c286-4d45-a23f-1db93d55bf87 does not exist
Jan 22 09:39:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:56.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:56 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:56 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:57 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:39:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:58.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:39:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:58.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:58 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:58 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:59 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:00.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:00 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:01 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:02.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:02 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:03 np0005592157 ovn_controller[146940]: 2026-01-22T14:40:03Z|00068|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 09:40:03 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:03 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:04.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:40:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:40:04 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:05 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:06.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 22 09:40:07 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 22 09:40:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 22 09:40:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:08 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:08 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:09 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:10 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 09:40:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 09:40:12 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:12.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:12.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 22 09:40:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 22 09:40:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 22 09:40:13 np0005592157 podman[301620]: 2026-01-22 14:40:13.316659599 +0000 UTC m=+0.053963172 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:40:13 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:13 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 09:40:14 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:14.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:15 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 22 09:40:16 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:16.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:16.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:40:17 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 09:40:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:18.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:18.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:18 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:18 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:19 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:20 np0005592157 podman[301644]: 2026-01-22 14:40:20.441722096 +0000 UTC m=+0.165980514 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:40:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:20.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:20.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:20 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:21 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:40:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:22.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:40:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:22.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:22 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:22 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:24 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:24.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:24.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:25 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:26 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:26.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:26.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:26 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:40:26.831 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:40:26 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:40:26.832 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:40:27 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:28 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:28 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:28.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:28.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:29 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:30 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:30.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:31 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:32 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:32.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:32.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:33 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:33 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:34 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:34.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:35 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:36 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:36.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:36 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:40:36.834 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.643397) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837643588, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 892, "num_deletes": 256, "total_data_size": 1024621, "memory_usage": 1040264, "flush_reason": "Manual Compaction"}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837651522, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1008549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59872, "largest_seqno": 60763, "table_properties": {"data_size": 1004293, "index_size": 1779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10872, "raw_average_key_size": 20, "raw_value_size": 995023, "raw_average_value_size": 1842, "num_data_blocks": 77, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092786, "oldest_key_time": 1769092786, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 8215 microseconds, and 3695 cpu microseconds.
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.651618) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1008549 bytes OK
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.651647) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.653773) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.653797) EVENT_LOG_v1 {"time_micros": 1769092837653789, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.653820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1020197, prev total WAL file size 1020197, number of live WAL files 2.
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.654758) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(984KB)], [131(11MB)]
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837654891, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 12919404, "oldest_snapshot_seqno": -1}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 11107 keys, 12766767 bytes, temperature: kUnknown
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837737253, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 12766767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12702055, "index_size": 35863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 300925, "raw_average_key_size": 27, "raw_value_size": 12509144, "raw_average_value_size": 1126, "num_data_blocks": 1353, "num_entries": 11107, "num_filter_entries": 11107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.737524) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 12766767 bytes
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.739065) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.7 rd, 154.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(25.5) write-amplify(12.7) OK, records in: 11636, records dropped: 529 output_compression: NoCompression
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.739096) EVENT_LOG_v1 {"time_micros": 1769092837739082, "job": 80, "event": "compaction_finished", "compaction_time_micros": 82433, "compaction_time_cpu_micros": 47901, "output_level": 6, "num_output_files": 1, "total_output_size": 12766767, "num_input_records": 11636, "num_output_records": 11107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837739517, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837744043, "job": 80, "event": "table_file_deletion", "file_number": 131}
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.654633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.744196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.744205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.744209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.744213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:40:37.744217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:38 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:38 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:38 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:39 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:40:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:40:41 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:40:42 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:42.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:42.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 22 09:40:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 22 09:40:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 22 09:40:43 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:43 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:44 np0005592157 podman[301733]: 2026-01-22 14:40:44.355871378 +0000 UTC m=+0.080896910 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:40:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Jan 22 09:40:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:44.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:44.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:44 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:45 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 1.4 KiB/s wr, 11 op/s
Jan 22 09:40:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:46.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:46.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 22 09:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:40:46 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 22 09:40:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 22 09:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:40:47
Jan 22 09:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Jan 22 09:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:40:47 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:40:47.616 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:40:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:40:47.616 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:40:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:40:47.616 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:40:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 1.7 KiB/s wr, 14 op/s
Jan 22 09:40:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:48.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:48.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:48 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:48 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:49 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Jan 22 09:40:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:50.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:50.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:50 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:51 np0005592157 podman[301807]: 2026-01-22 14:40:51.396451001 +0000 UTC m=+0.126210817 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:40:51 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.5 KiB/s wr, 44 op/s
Jan 22 09:40:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:52.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:52.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 22 09:40:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 22 09:40:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 22 09:40:53 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:53 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Jan 22 09:40:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:40:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:40:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:54.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:54 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:54 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:55 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:55 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Jan 22 09:40:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:56.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:56.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:40:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:40:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:40:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:40:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:40:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 66354e9c-2d3b-409d-adef-c90fd7bb2a39 does not exist
Jan 22 09:40:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5fb6731a-f2c6-4cbf-b258-3bd034ce5f73 does not exist
Jan 22 09:40:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7886a2d5-fd55-48d1-9428-e4406627d63c does not exist
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:57 np0005592157 podman[302104]: 2026-01-22 14:40:57.945374406 +0000 UTC m=+0.074209105 container create b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:40:57 np0005592157 systemd[1]: Started libpod-conmon-b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f.scope.
Jan 22 09:40:58 np0005592157 podman[302104]: 2026-01-22 14:40:57.911212337 +0000 UTC m=+0.040047086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:40:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:40:58 np0005592157 podman[302104]: 2026-01-22 14:40:58.057290266 +0000 UTC m=+0.186125005 container init b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 09:40:58 np0005592157 podman[302104]: 2026-01-22 14:40:58.072041392 +0000 UTC m=+0.200876091 container start b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:40:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:40:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:40:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:40:58 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:58 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:58 np0005592157 laughing_chaplygin[302120]: 167 167
Jan 22 09:40:58 np0005592157 systemd[1]: libpod-b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f.scope: Deactivated successfully.
Jan 22 09:40:58 np0005592157 podman[302104]: 2026-01-22 14:40:58.078652687 +0000 UTC m=+0.207487436 container attach b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:40:58 np0005592157 podman[302104]: 2026-01-22 14:40:58.080541613 +0000 UTC m=+0.209376312 container died b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:40:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3ce9486a222631ddfd274992eaa5afa2337c6090ffce86d02766afe3df0ef11b-merged.mount: Deactivated successfully.
Jan 22 09:40:58 np0005592157 podman[302104]: 2026-01-22 14:40:58.139008836 +0000 UTC m=+0.267843535 container remove b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:40:58 np0005592157 systemd[1]: libpod-conmon-b8114eb13049278fb55cb9a3be8056ae1cf97c0b43a9584e3e0f9f19a9cb302f.scope: Deactivated successfully.
Jan 22 09:40:58 np0005592157 podman[302142]: 2026-01-22 14:40:58.371152843 +0000 UTC m=+0.076818659 container create ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:40:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Jan 22 09:40:58 np0005592157 podman[302142]: 2026-01-22 14:40:58.339022235 +0000 UTC m=+0.044688091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:40:58 np0005592157 systemd[1]: Started libpod-conmon-ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198.scope.
Jan 22 09:40:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:40:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f8a12cc83f9774c500b28cfcceecc7e48f11f8497e1a02f78c7405c21cf6c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:40:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f8a12cc83f9774c500b28cfcceecc7e48f11f8497e1a02f78c7405c21cf6c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:40:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f8a12cc83f9774c500b28cfcceecc7e48f11f8497e1a02f78c7405c21cf6c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:40:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f8a12cc83f9774c500b28cfcceecc7e48f11f8497e1a02f78c7405c21cf6c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:40:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f8a12cc83f9774c500b28cfcceecc7e48f11f8497e1a02f78c7405c21cf6c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:40:58 np0005592157 podman[302142]: 2026-01-22 14:40:58.519469778 +0000 UTC m=+0.225135644 container init ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:40:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:58.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:58 np0005592157 podman[302142]: 2026-01-22 14:40:58.534106002 +0000 UTC m=+0.239771808 container start ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:40:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:40:58 np0005592157 podman[302142]: 2026-01-22 14:40:58.539444294 +0000 UTC m=+0.245110080 container attach ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:40:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:58.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:59 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:59 np0005592157 strange_tu[302159]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:40:59 np0005592157 strange_tu[302159]: --> relative data size: 1.0
Jan 22 09:40:59 np0005592157 strange_tu[302159]: --> All data devices are unavailable
Jan 22 09:40:59 np0005592157 systemd[1]: libpod-ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198.scope: Deactivated successfully.
Jan 22 09:40:59 np0005592157 podman[302142]: 2026-01-22 14:40:59.473900847 +0000 UTC m=+1.179566703 container died ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:40:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-55f8a12cc83f9774c500b28cfcceecc7e48f11f8497e1a02f78c7405c21cf6c8-merged.mount: Deactivated successfully.
Jan 22 09:40:59 np0005592157 podman[302142]: 2026-01-22 14:40:59.550300325 +0000 UTC m=+1.255966141 container remove ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:40:59 np0005592157 systemd[1]: libpod-conmon-ffa982d902ba30276e541e9ece4bbb7b5c161a18b7973ad8f9f83b605703c198.scope: Deactivated successfully.
Jan 22 09:41:00 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:00 np0005592157 podman[302327]: 2026-01-22 14:41:00.27951401 +0000 UTC m=+0.056703890 container create 4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcclintock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:41:00 np0005592157 systemd[1]: Started libpod-conmon-4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6.scope.
Jan 22 09:41:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:41:00 np0005592157 podman[302327]: 2026-01-22 14:41:00.255243307 +0000 UTC m=+0.032433267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:41:00 np0005592157 podman[302327]: 2026-01-22 14:41:00.361792534 +0000 UTC m=+0.138982464 container init 4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:41:00 np0005592157 podman[302327]: 2026-01-22 14:41:00.370342356 +0000 UTC m=+0.147532266 container start 4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:41:00 np0005592157 podman[302327]: 2026-01-22 14:41:00.374226383 +0000 UTC m=+0.151416303 container attach 4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:41:00 np0005592157 optimistic_mcclintock[302343]: 167 167
Jan 22 09:41:00 np0005592157 systemd[1]: libpod-4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6.scope: Deactivated successfully.
Jan 22 09:41:00 np0005592157 podman[302327]: 2026-01-22 14:41:00.376969101 +0000 UTC m=+0.154159021 container died 4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:41:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7dd50bddc24b1e1f62e82d9f9334f8ea32a0c005c4266ec742b101e6ff0fdcbf-merged.mount: Deactivated successfully.
Jan 22 09:41:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:00 np0005592157 podman[302327]: 2026-01-22 14:41:00.417820396 +0000 UTC m=+0.195010296 container remove 4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:41:00 np0005592157 systemd[1]: libpod-conmon-4f1432f48b63befcae898927b778b1b330fb3d302ede1dfef713733dc9da02c6.scope: Deactivated successfully.
Jan 22 09:41:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:00 np0005592157 podman[302370]: 2026-01-22 14:41:00.650613739 +0000 UTC m=+0.070872312 container create fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:41:00 np0005592157 systemd[1]: Started libpod-conmon-fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3.scope.
Jan 22 09:41:00 np0005592157 podman[302370]: 2026-01-22 14:41:00.623199098 +0000 UTC m=+0.043457741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:41:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:41:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3e903057611177e640f9f1c2de474870c86f5542dfbb814d8461a1271d87a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3e903057611177e640f9f1c2de474870c86f5542dfbb814d8461a1271d87a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3e903057611177e640f9f1c2de474870c86f5542dfbb814d8461a1271d87a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3e903057611177e640f9f1c2de474870c86f5542dfbb814d8461a1271d87a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:00 np0005592157 podman[302370]: 2026-01-22 14:41:00.74605348 +0000 UTC m=+0.166312083 container init fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:41:00 np0005592157 podman[302370]: 2026-01-22 14:41:00.759831372 +0000 UTC m=+0.180089955 container start fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:41:00 np0005592157 podman[302370]: 2026-01-22 14:41:00.76418806 +0000 UTC m=+0.184446633 container attach fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:41:01 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]: {
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:    "0": [
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:        {
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "devices": [
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "/dev/loop3"
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            ],
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "lv_name": "ceph_lv0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "lv_size": "7511998464",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "name": "ceph_lv0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "tags": {
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.cluster_name": "ceph",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.crush_device_class": "",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.encrypted": "0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.osd_id": "0",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.type": "block",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:                "ceph.vdo": "0"
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            },
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "type": "block",
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:            "vg_name": "ceph_vg0"
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:        }
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]:    ]
Jan 22 09:41:01 np0005592157 modest_elgamal[302386]: }
Jan 22 09:41:01 np0005592157 systemd[1]: libpod-fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3.scope: Deactivated successfully.
Jan 22 09:41:01 np0005592157 podman[302370]: 2026-01-22 14:41:01.51691804 +0000 UTC m=+0.937176593 container died fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:41:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8c3e903057611177e640f9f1c2de474870c86f5542dfbb814d8461a1271d87a0-merged.mount: Deactivated successfully.
Jan 22 09:41:01 np0005592157 podman[302370]: 2026-01-22 14:41:01.56444811 +0000 UTC m=+0.984706663 container remove fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:41:01 np0005592157 systemd[1]: libpod-conmon-fb4eebb2873af77b7625d2a96afa068d257897dd315085bc52621d6b4dd1b8d3.scope: Deactivated successfully.
Jan 22 09:41:02 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:02 np0005592157 podman[302551]: 2026-01-22 14:41:02.211833733 +0000 UTC m=+0.037434261 container create 0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:41:02 np0005592157 systemd[1]: Started libpod-conmon-0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd.scope.
Jan 22 09:41:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:41:02 np0005592157 podman[302551]: 2026-01-22 14:41:02.194994405 +0000 UTC m=+0.020594963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:41:02 np0005592157 podman[302551]: 2026-01-22 14:41:02.291897042 +0000 UTC m=+0.117497670 container init 0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:41:02 np0005592157 podman[302551]: 2026-01-22 14:41:02.300064035 +0000 UTC m=+0.125664573 container start 0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:41:02 np0005592157 podman[302551]: 2026-01-22 14:41:02.304269499 +0000 UTC m=+0.129870037 container attach 0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:41:02 np0005592157 nervous_lehmann[302568]: 167 167
Jan 22 09:41:02 np0005592157 systemd[1]: libpod-0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd.scope: Deactivated successfully.
Jan 22 09:41:02 np0005592157 podman[302551]: 2026-01-22 14:41:02.306748541 +0000 UTC m=+0.132349109 container died 0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:41:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e4644c75d36412cf2238a8af830ee06cb0a8c3cef868d057b8cf972f9010832c-merged.mount: Deactivated successfully.
Jan 22 09:41:02 np0005592157 podman[302551]: 2026-01-22 14:41:02.349669537 +0000 UTC m=+0.175270065 container remove 0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 09:41:02 np0005592157 systemd[1]: libpod-conmon-0d9d1f53512edc225d131a7b4b873fc1128bece3a22afae9f1c51a53d9abe3cd.scope: Deactivated successfully.
Jan 22 09:41:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:02 np0005592157 podman[302593]: 2026-01-22 14:41:02.507083377 +0000 UTC m=+0.040787124 container create 4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:41:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:02.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:02 np0005592157 systemd[1]: Started libpod-conmon-4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a.scope.
Jan 22 09:41:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:41:02 np0005592157 podman[302593]: 2026-01-22 14:41:02.488307321 +0000 UTC m=+0.022011078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:41:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ede80e0dd552a1b4b8123304c3e55f5dc04dd7df7c6dababb91dc60d2424fea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ede80e0dd552a1b4b8123304c3e55f5dc04dd7df7c6dababb91dc60d2424fea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ede80e0dd552a1b4b8123304c3e55f5dc04dd7df7c6dababb91dc60d2424fea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ede80e0dd552a1b4b8123304c3e55f5dc04dd7df7c6dababb91dc60d2424fea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:41:02 np0005592157 podman[302593]: 2026-01-22 14:41:02.601315338 +0000 UTC m=+0.135019085 container init 4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:41:02 np0005592157 podman[302593]: 2026-01-22 14:41:02.609654095 +0000 UTC m=+0.143357882 container start 4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:41:02 np0005592157 podman[302593]: 2026-01-22 14:41:02.613971272 +0000 UTC m=+0.147675059 container attach 4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:41:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:03 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:03 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]: {
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]:        "osd_id": 0,
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]:        "type": "bluestore"
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]:    }
Jan 22 09:41:03 np0005592157 naughty_roentgen[302609]: }
Jan 22 09:41:03 np0005592157 systemd[1]: libpod-4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a.scope: Deactivated successfully.
Jan 22 09:41:03 np0005592157 podman[302593]: 2026-01-22 14:41:03.457529538 +0000 UTC m=+0.991233285 container died 4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:41:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9ede80e0dd552a1b4b8123304c3e55f5dc04dd7df7c6dababb91dc60d2424fea-merged.mount: Deactivated successfully.
Jan 22 09:41:03 np0005592157 podman[302593]: 2026-01-22 14:41:03.510557785 +0000 UTC m=+1.044261522 container remove 4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:41:03 np0005592157 systemd[1]: libpod-conmon-4510190a35e74116fd84c29032581fdcda7fa7716915909d8900f2ff7beaaa9a.scope: Deactivated successfully.
Jan 22 09:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:41:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2efd0004-7d81-49b7-84b3-c1cf232cd60f does not exist
Jan 22 09:41:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 20cfc1fb-a2db-4e10-acdf-dbb6d20ca9b1 does not exist
Jan 22 09:41:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 81e62ab8-98fd-48fb-946c-b46d7dc29b5a does not exist
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:41:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:41:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:04.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:04.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:04 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:05 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:06.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:06 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:07 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:08.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:08 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:08 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:09 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:10.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:10 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:11 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:11 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:12.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:12 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:12 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:13 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:14.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:14 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:15 np0005592157 podman[302747]: 2026-01-22 14:41:15.344153052 +0000 UTC m=+0.075653471 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:41:15 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:41:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:16.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:16 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.661716) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877661752, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 251, "total_data_size": 827098, "memory_usage": 841472, "flush_reason": "Manual Compaction"}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877667513, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 597682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60764, "largest_seqno": 61513, "table_properties": {"data_size": 594250, "index_size": 1147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10006, "raw_average_key_size": 21, "raw_value_size": 586609, "raw_average_value_size": 1264, "num_data_blocks": 49, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092838, "oldest_key_time": 1769092838, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 5850 microseconds, and 2329 cpu microseconds.
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.667563) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 597682 bytes OK
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.667582) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.669327) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.669344) EVENT_LOG_v1 {"time_micros": 1769092877669339, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.669363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 823193, prev total WAL file size 823193, number of live WAL files 2.
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.669896) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373537' seq:72057594037927935, type:22 .. '6D6772737461740032303038' seq:0, type:0; will stop at (end)
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(583KB)], [134(12MB)]
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877669990, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13364449, "oldest_snapshot_seqno": -1}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 11066 keys, 9678720 bytes, temperature: kUnknown
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877737099, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9678720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9618566, "index_size": 31369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 300581, "raw_average_key_size": 27, "raw_value_size": 9430580, "raw_average_value_size": 852, "num_data_blocks": 1165, "num_entries": 11066, "num_filter_entries": 11066, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.737344) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9678720 bytes
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.739100) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.9 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(38.6) write-amplify(16.2) OK, records in: 11571, records dropped: 505 output_compression: NoCompression
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.739115) EVENT_LOG_v1 {"time_micros": 1769092877739108, "job": 82, "event": "compaction_finished", "compaction_time_micros": 67183, "compaction_time_cpu_micros": 37457, "output_level": 6, "num_output_files": 1, "total_output_size": 9678720, "num_input_records": 11571, "num_output_records": 11066, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877739306, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877741573, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.669787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.741649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.741657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.741659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.741660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:41:17.741662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:17 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:41:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752123973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:41:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:41:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752123973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:41:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:18.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:18 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:20 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:20.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:21 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:22 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:22 np0005592157 podman[302771]: 2026-01-22 14:41:22.367990085 +0000 UTC m=+0.106931107 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 09:41:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:23 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:23 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:24 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:24.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:25 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:26 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:26.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:27 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:28 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:28 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:28.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:28.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:29 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:41:29.628 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:41:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:41:29.630 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:41:30 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:30.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:30.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:31 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:32 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:32.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:32.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:33 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:33 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:34 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:34.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:35 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:36.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:36 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 3887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:37 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:38.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:38 np0005592157 ceph-mon[74359]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:38 np0005592157 ceph-mon[74359]: Health check update: 44 slow ops, oldest one blocked for 3887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:39 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:41:39.632 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:41:39 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:39 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:40.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:40 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:41 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:42.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:42.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 3892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:42 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:42 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 3892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:43 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:44.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:44.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:44 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:45 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:46 np0005592157 podman[302884]: 2026-01-22 14:41:46.120680394 +0000 UTC m=+0.058600137 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:41:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:41:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:46.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:46.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:47 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:41:47
Jan 22 09:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes']
Jan 22 09:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:41:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:41:47.617 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:41:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:41:47.617 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:41:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:41:47.617 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:41:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 3897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:48 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:48 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 3897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:41:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:48.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:49 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:50 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:41:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:41:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:50.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:41:51 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:52 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:41:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:52.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 3902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:53 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:53 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 3902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:53 np0005592157 podman[302932]: 2026-01-22 14:41:53.413979143 +0000 UTC m=+0.146854169 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 09:41:54 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:41:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:54.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:54.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:55 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:56 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:41:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:56.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:56.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:41:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1963 writes, 8927 keys, 1963 commit groups, 1.0 writes per commit group, ingest: 11.25 MB, 0.02 MB/s#012Interval WAL: 1963 writes, 1963 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     70.5      1.02              0.31        41    0.025       0      0       0.0       0.0#012  L6      1/0    9.23 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   5.1    117.4    100.2      3.65              1.39        40    0.091    328K    21K       0.0       0.0#012 Sum      1/0    9.23 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   6.1     91.8     93.7      4.67              1.70        81    0.058    328K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    134.2    131.3      0.61              0.31        14    0.044     78K   3606       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0    117.4    100.2      3.65              1.39        40    0.091    328K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     70.8      1.02              0.31        40    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.070, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.43 GB write, 0.10 MB/s write, 0.42 GB read, 0.10 MB/s read, 4.7 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 46.11 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000328 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2462,44.00 MB,14.475%) FilterBlock(82,917.05 KB,0.29459%) IndexBlock(82,1.22 MB,0.399765%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:41:57 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 3908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:58 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:58 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 3908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:41:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:58.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:41:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:41:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:58.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:41:59 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:00 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:42:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:00.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:01 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:02 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:02.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:02.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 3913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:03 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:03 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 3913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:42:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:42:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 22 09:42:04 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:42:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:04.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:42:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:04.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 22 09:42:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c2d514ff-f2dd-4435-89f0-720b64317f96 does not exist
Jan 22 09:42:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d1b3e553-e195-4ffa-a77f-27a3de2473b8 does not exist
Jan 22 09:42:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 265ddbc6-c122-444f-8e4b-5f97887ec054 does not exist
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:42:05 np0005592157 podman[303237]: 2026-01-22 14:42:05.742083944 +0000 UTC m=+0.050057105 container create 6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:42:05 np0005592157 systemd[1]: Started libpod-conmon-6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb.scope.
Jan 22 09:42:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:42:05 np0005592157 podman[303237]: 2026-01-22 14:42:05.715697288 +0000 UTC m=+0.023670499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:42:05 np0005592157 podman[303237]: 2026-01-22 14:42:05.835529145 +0000 UTC m=+0.143502396 container init 6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:42:05 np0005592157 podman[303237]: 2026-01-22 14:42:05.843882493 +0000 UTC m=+0.151855654 container start 6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:42:05 np0005592157 amazing_elion[303254]: 167 167
Jan 22 09:42:05 np0005592157 systemd[1]: libpod-6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb.scope: Deactivated successfully.
Jan 22 09:42:05 np0005592157 conmon[303254]: conmon 6fb68833c0536cd8866e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb.scope/container/memory.events
Jan 22 09:42:05 np0005592157 podman[303237]: 2026-01-22 14:42:05.928165067 +0000 UTC m=+0.236138318 container attach 6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:42:05 np0005592157 podman[303237]: 2026-01-22 14:42:05.929270374 +0000 UTC m=+0.237243575 container died 6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:42:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-09d2dcc3ad13fc2b2d13ae80fbea2d30217452c073ad2ff95189c961a3972d86-merged.mount: Deactivated successfully.
Jan 22 09:42:05 np0005592157 podman[303237]: 2026-01-22 14:42:05.975358249 +0000 UTC m=+0.283331410 container remove 6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:42:06 np0005592157 systemd[1]: libpod-conmon-6fb68833c0536cd8866e8f2b3de2c191513a4abcf6f0405f47a9d94dd8a76edb.scope: Deactivated successfully.
Jan 22 09:42:06 np0005592157 podman[303278]: 2026-01-22 14:42:06.270732477 +0000 UTC m=+0.076438400 container create 3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:42:06 np0005592157 podman[303278]: 2026-01-22 14:42:06.240992848 +0000 UTC m=+0.046698831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:42:06 np0005592157 systemd[1]: Started libpod-conmon-3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36.scope.
Jan 22 09:42:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:42:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35387eec27af28d665080850a83d99265cd50804ad0656438b5c88ff16786c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35387eec27af28d665080850a83d99265cd50804ad0656438b5c88ff16786c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35387eec27af28d665080850a83d99265cd50804ad0656438b5c88ff16786c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35387eec27af28d665080850a83d99265cd50804ad0656438b5c88ff16786c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35387eec27af28d665080850a83d99265cd50804ad0656438b5c88ff16786c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.3 KiB/s wr, 13 op/s
Jan 22 09:42:06 np0005592157 podman[303278]: 2026-01-22 14:42:06.465189627 +0000 UTC m=+0.270895560 container init 3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:42:06 np0005592157 podman[303278]: 2026-01-22 14:42:06.476410536 +0000 UTC m=+0.282116449 container start 3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:42:06 np0005592157 podman[303278]: 2026-01-22 14:42:06.482579889 +0000 UTC m=+0.288285862 container attach 3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:42:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:06.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:06 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:06.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:07 np0005592157 boring_cray[303344]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:42:07 np0005592157 boring_cray[303344]: --> relative data size: 1.0
Jan 22 09:42:07 np0005592157 boring_cray[303344]: --> All data devices are unavailable
Jan 22 09:42:07 np0005592157 systemd[1]: libpod-3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36.scope: Deactivated successfully.
Jan 22 09:42:07 np0005592157 podman[303278]: 2026-01-22 14:42:07.354875018 +0000 UTC m=+1.160580921 container died 3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:42:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e35387eec27af28d665080850a83d99265cd50804ad0656438b5c88ff16786c7-merged.mount: Deactivated successfully.
Jan 22 09:42:07 np0005592157 podman[303278]: 2026-01-22 14:42:07.415321989 +0000 UTC m=+1.221027892 container remove 3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:42:07 np0005592157 systemd[1]: libpod-conmon-3e3d68f138b33211f4f0709d4a7a71006f155cf643566449cd928bb2485aef36.scope: Deactivated successfully.
Jan 22 09:42:07 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 3918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:08 np0005592157 podman[303516]: 2026-01-22 14:42:08.043851153 +0000 UTC m=+0.042035285 container create 70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:42:08 np0005592157 systemd[1]: Started libpod-conmon-70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66.scope.
Jan 22 09:42:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:42:08 np0005592157 podman[303516]: 2026-01-22 14:42:08.02802784 +0000 UTC m=+0.026212002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:42:08 np0005592157 podman[303516]: 2026-01-22 14:42:08.132353762 +0000 UTC m=+0.130537984 container init 70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:42:08 np0005592157 podman[303516]: 2026-01-22 14:42:08.137822178 +0000 UTC m=+0.136006340 container start 70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 09:42:08 np0005592157 podman[303516]: 2026-01-22 14:42:08.14233787 +0000 UTC m=+0.140522102 container attach 70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:42:08 np0005592157 amazing_feynman[303532]: 167 167
Jan 22 09:42:08 np0005592157 systemd[1]: libpod-70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66.scope: Deactivated successfully.
Jan 22 09:42:08 np0005592157 podman[303516]: 2026-01-22 14:42:08.143914739 +0000 UTC m=+0.142098911 container died 70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:42:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-128ec1619e5571e069e6b086a57b9011575e462124858408f326f364189ee004-merged.mount: Deactivated successfully.
Jan 22 09:42:08 np0005592157 podman[303516]: 2026-01-22 14:42:08.190531677 +0000 UTC m=+0.188715849 container remove 70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:42:08 np0005592157 systemd[1]: libpod-conmon-70bb58e856b49818c3874f2f8d65a7ff7c1a8cc85a3ff8e8db25131f34c77b66.scope: Deactivated successfully.
Jan 22 09:42:08 np0005592157 podman[303554]: 2026-01-22 14:42:08.387111851 +0000 UTC m=+0.057428678 container create 50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:42:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.3 KiB/s wr, 13 op/s
Jan 22 09:42:08 np0005592157 podman[303554]: 2026-01-22 14:42:08.357385342 +0000 UTC m=+0.027702249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:42:08 np0005592157 systemd[1]: Started libpod-conmon-50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de.scope.
Jan 22 09:42:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:42:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9102e3d9a66c8ed74162fa3fe79b4f3ab9438354de418c95dc3373fa25f120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9102e3d9a66c8ed74162fa3fe79b4f3ab9438354de418c95dc3373fa25f120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9102e3d9a66c8ed74162fa3fe79b4f3ab9438354de418c95dc3373fa25f120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9102e3d9a66c8ed74162fa3fe79b4f3ab9438354de418c95dc3373fa25f120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:08 np0005592157 podman[303554]: 2026-01-22 14:42:08.51631957 +0000 UTC m=+0.186636457 container init 50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:42:08 np0005592157 podman[303554]: 2026-01-22 14:42:08.536138423 +0000 UTC m=+0.206455280 container start 50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 09:42:08 np0005592157 podman[303554]: 2026-01-22 14:42:08.540820529 +0000 UTC m=+0.211137386 container attach 50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williamson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:42:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:08.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:08.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:08 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:08 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 3918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]: {
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:    "0": [
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:        {
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "devices": [
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "/dev/loop3"
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            ],
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "lv_name": "ceph_lv0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "lv_size": "7511998464",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "name": "ceph_lv0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "tags": {
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.cluster_name": "ceph",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.crush_device_class": "",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.encrypted": "0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.osd_id": "0",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.type": "block",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:                "ceph.vdo": "0"
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            },
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "type": "block",
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:            "vg_name": "ceph_vg0"
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:        }
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]:    ]
Jan 22 09:42:09 np0005592157 friendly_williamson[303572]: }
Jan 22 09:42:09 np0005592157 systemd[1]: libpod-50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de.scope: Deactivated successfully.
Jan 22 09:42:09 np0005592157 conmon[303572]: conmon 50edffaac56fafae0b65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de.scope/container/memory.events
Jan 22 09:42:09 np0005592157 podman[303554]: 2026-01-22 14:42:09.364599463 +0000 UTC m=+1.034916300 container died 50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williamson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 09:42:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5b9102e3d9a66c8ed74162fa3fe79b4f3ab9438354de418c95dc3373fa25f120-merged.mount: Deactivated successfully.
Jan 22 09:42:09 np0005592157 podman[303554]: 2026-01-22 14:42:09.431036234 +0000 UTC m=+1.101353071 container remove 50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 22 09:42:09 np0005592157 systemd[1]: libpod-conmon-50edffaac56fafae0b65f5c7ea4fe9816f65297a2165365609c9ce9d9c9c94de.scope: Deactivated successfully.
Jan 22 09:42:09 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:10 np0005592157 podman[303734]: 2026-01-22 14:42:10.336805145 +0000 UTC m=+0.060745660 container create 5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:42:10 np0005592157 systemd[1]: Started libpod-conmon-5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc.scope.
Jan 22 09:42:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:42:10 np0005592157 podman[303734]: 2026-01-22 14:42:10.316989563 +0000 UTC m=+0.040930118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:42:10 np0005592157 podman[303734]: 2026-01-22 14:42:10.426148295 +0000 UTC m=+0.150088840 container init 5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:42:10 np0005592157 podman[303734]: 2026-01-22 14:42:10.433495687 +0000 UTC m=+0.157436202 container start 5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:42:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 09:42:10 np0005592157 modest_hoover[303750]: 167 167
Jan 22 09:42:10 np0005592157 systemd[1]: libpod-5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc.scope: Deactivated successfully.
Jan 22 09:42:10 np0005592157 conmon[303750]: conmon 5ac99db185c3bbd3bcc6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc.scope/container/memory.events
Jan 22 09:42:10 np0005592157 podman[303734]: 2026-01-22 14:42:10.444730886 +0000 UTC m=+0.168671391 container attach 5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:42:10 np0005592157 podman[303734]: 2026-01-22 14:42:10.445463664 +0000 UTC m=+0.169404169 container died 5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:42:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e465952000efc3ce432fc70e2b47e92864e7382b9e3dafef50f7c1a9430e6fcd-merged.mount: Deactivated successfully.
Jan 22 09:42:10 np0005592157 podman[303734]: 2026-01-22 14:42:10.482183487 +0000 UTC m=+0.206123982 container remove 5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:42:10 np0005592157 systemd[1]: libpod-conmon-5ac99db185c3bbd3bcc665e020181ecd860e264f6d7c0f9e3fc5cea9a2b61fbc.scope: Deactivated successfully.
Jan 22 09:42:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:10.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:10 np0005592157 podman[303773]: 2026-01-22 14:42:10.688104051 +0000 UTC m=+0.075022475 container create d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_blackburn, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 09:42:10 np0005592157 podman[303773]: 2026-01-22 14:42:10.634960311 +0000 UTC m=+0.021878795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:42:10 np0005592157 systemd[1]: Started libpod-conmon-d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f.scope.
Jan 22 09:42:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:42:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436798dcc731213aa0d065caa5807ece3fc51b4b22ccbf9cf71b428167e6cdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436798dcc731213aa0d065caa5807ece3fc51b4b22ccbf9cf71b428167e6cdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436798dcc731213aa0d065caa5807ece3fc51b4b22ccbf9cf71b428167e6cdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436798dcc731213aa0d065caa5807ece3fc51b4b22ccbf9cf71b428167e6cdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:42:10 np0005592157 podman[303773]: 2026-01-22 14:42:10.876881541 +0000 UTC m=+0.263800055 container init d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:42:10 np0005592157 podman[303773]: 2026-01-22 14:42:10.887206607 +0000 UTC m=+0.274125061 container start d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:42:10 np0005592157 podman[303773]: 2026-01-22 14:42:10.891617867 +0000 UTC m=+0.278536381 container attach d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_blackburn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:42:11 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]: {
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]:        "osd_id": 0,
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]:        "type": "bluestore"
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]:    }
Jan 22 09:42:11 np0005592157 youthful_blackburn[303790]: }
Jan 22 09:42:11 np0005592157 systemd[1]: libpod-d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f.scope: Deactivated successfully.
Jan 22 09:42:11 np0005592157 podman[303773]: 2026-01-22 14:42:11.733551022 +0000 UTC m=+1.120469516 container died d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:42:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3436798dcc731213aa0d065caa5807ece3fc51b4b22ccbf9cf71b428167e6cdc-merged.mount: Deactivated successfully.
Jan 22 09:42:11 np0005592157 podman[303773]: 2026-01-22 14:42:11.794886316 +0000 UTC m=+1.181804750 container remove d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:42:11 np0005592157 systemd[1]: libpod-conmon-d63299d7dadaf924d71492e4ecdda79654dec7600c290f9130039ffdac5fbb1f.scope: Deactivated successfully.
Jan 22 09:42:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:42:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:42:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dd712632-7534-4db0-81eb-380ea7af3cf4 does not exist
Jan 22 09:42:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 177747d7-865d-425e-8b1d-47dc52710907 does not exist
Jan 22 09:42:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e27753f5-29b1-4a19-a878-8e03f8820e8a does not exist
Jan 22 09:42:12 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:12 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 09:42:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:12.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 3922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:13 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:13 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 3922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:14 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 09:42:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:14.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:14.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:15 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:16 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:16 np0005592157 podman[303877]: 2026-01-22 14:42:16.375523056 +0000 UTC m=+0.082519021 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:42:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Jan 22 09:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:42:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:16.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:17 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 3928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:42:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1612262341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:42:18 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:18 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 3928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:42:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1612262341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:42:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 09:42:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:18.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:18.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:19 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:20 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 09:42:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:20.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:21 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 22 09:42:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 22 09:42:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 22 09:42:22 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:22.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:22.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 3932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:23 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:23 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 3932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:24 np0005592157 podman[303901]: 2026-01-22 14:42:24.384351188 +0000 UTC m=+0.127196481 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 09:42:24 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 614 B/s rd, 0 B/s wr, 1 op/s
Jan 22 09:42:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:24.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:24.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:25 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 22 09:42:26 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:26.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:27 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 3937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 22 09:42:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 22 09:42:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 22 09:42:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 3937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.573642) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948573701, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1119, "num_deletes": 252, "total_data_size": 1364196, "memory_usage": 1391008, "flush_reason": "Manual Compaction"}
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948588994, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 1341494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61514, "largest_seqno": 62632, "table_properties": {"data_size": 1336559, "index_size": 2266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13171, "raw_average_key_size": 20, "raw_value_size": 1325558, "raw_average_value_size": 2097, "num_data_blocks": 98, "num_entries": 632, "num_filter_entries": 632, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092877, "oldest_key_time": 1769092877, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 15448 microseconds, and 7548 cpu microseconds.
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.589083) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 1341494 bytes OK
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.589115) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.591268) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.591295) EVENT_LOG_v1 {"time_micros": 1769092948591287, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.591320) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1358955, prev total WAL file size 1358955, number of live WAL files 2.
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.592215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(1310KB)], [137(9451KB)]
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948592331, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11020214, "oldest_snapshot_seqno": -1}
Jan 22 09:42:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:28.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 11177 keys, 9368657 bytes, temperature: kUnknown
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948701526, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 9368657, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9308249, "index_size": 31367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 304088, "raw_average_key_size": 27, "raw_value_size": 9118688, "raw_average_value_size": 815, "num_data_blocks": 1161, "num_entries": 11177, "num_filter_entries": 11177, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.702602) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9368657 bytes
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.714577) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.8 rd, 85.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 11698, records dropped: 521 output_compression: NoCompression
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.714609) EVENT_LOG_v1 {"time_micros": 1769092948714595, "job": 84, "event": "compaction_finished", "compaction_time_micros": 109296, "compaction_time_cpu_micros": 49868, "output_level": 6, "num_output_files": 1, "total_output_size": 9368657, "num_input_records": 11698, "num_output_records": 11177, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948715618, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948718190, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.592095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.718377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.718389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.718392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.718396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:42:28.718399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:30 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 09:42:30 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:42:30.494 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:42:30 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:42:30.495 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:42:30 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:42:30.496 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:42:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:30.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:31 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:31 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:32 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 22 09:42:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:32.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:32.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 3942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:33 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:33 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 3942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:34 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 23 op/s
Jan 22 09:42:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:34.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:34.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:35 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:36 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:36.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:36.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:37 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 51 slow ops, oldest one blocked for 3948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:38 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:38 np0005592157 ceph-mon[74359]: Health check update: 51 slow ops, oldest one blocked for 3948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:38.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:39 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:40 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:40.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:41 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 51 slow ops, oldest one blocked for 3953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:42.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:42 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:43 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:43 np0005592157 ceph-mon[74359]: Health check update: 51 slow ops, oldest one blocked for 3953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:44.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:44 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:45 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:42:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:46.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:46.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:46 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592157 podman[304012]: 2026-01-22 14:42:46.871745023 +0000 UTC m=+0.078885011 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 09:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:42:47
Jan 22 09:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', 'volumes', '.rgw.root', 'images', 'backups']
Jan 22 09:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:42:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:42:47.617 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:42:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:42:47.618 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:42:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:42:47.618 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:42:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 51 slow ops, oldest one blocked for 3958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:47 np0005592157 ceph-mon[74359]: Health check update: 51 slow ops, oldest one blocked for 3958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:47 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:48.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:48.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:48 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:49 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:50.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:50 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:51 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:52.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 51 slow ops, oldest one blocked for 3963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:52.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:52 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:52 np0005592157 ceph-mon[74359]: Health check update: 51 slow ops, oldest one blocked for 3963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:53 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:54.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:54.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:54 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:55 np0005592157 podman[304061]: 2026-01-22 14:42:55.351370731 +0000 UTC m=+0.087649598 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:42:55 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:56.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:56 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 51 slow ops, oldest one blocked for 3968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:57 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:57 np0005592157 ceph-mon[74359]: Health check update: 51 slow ops, oldest one blocked for 3968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:42:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:42:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:42:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:42:58 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:59 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:00.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:00 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 3973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:02.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:03 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:03 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 3973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:43:04 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:43:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:43:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:05 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:06 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:06.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:07 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 3978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:08 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 3978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:08 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:08.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:08.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:09 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:10.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:10.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:10 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:11 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:11 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:12.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 3983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 3983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a77f9faf-8711-4939-8853-fca03ae6862b does not exist
Jan 22 09:43:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d9075081-7490-45a2-9e2a-dc1cc14375e0 does not exist
Jan 22 09:43:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2e512f87-c5bf-43cd-8553-9bd78afe000a does not exist
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:43:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:43:14 np0005592157 podman[304419]: 2026-01-22 14:43:14.073234155 +0000 UTC m=+0.038484868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:43:14 np0005592157 podman[304419]: 2026-01-22 14:43:14.275318437 +0000 UTC m=+0.240569070 container create 84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:43:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:43:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:43:14 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:14 np0005592157 systemd[1]: Started libpod-conmon-84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e.scope.
Jan 22 09:43:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:43:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:14 np0005592157 podman[304419]: 2026-01-22 14:43:14.524010327 +0000 UTC m=+0.489260990 container init 84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bardeen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:43:14 np0005592157 podman[304419]: 2026-01-22 14:43:14.531269557 +0000 UTC m=+0.496520180 container start 84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:43:14 np0005592157 podman[304419]: 2026-01-22 14:43:14.534871737 +0000 UTC m=+0.500122410 container attach 84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:43:14 np0005592157 charming_bardeen[304435]: 167 167
Jan 22 09:43:14 np0005592157 systemd[1]: libpod-84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e.scope: Deactivated successfully.
Jan 22 09:43:14 np0005592157 conmon[304435]: conmon 84e5c51df13280cb1553 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e.scope/container/memory.events
Jan 22 09:43:14 np0005592157 podman[304419]: 2026-01-22 14:43:14.539231445 +0000 UTC m=+0.504482088 container died 84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bardeen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 22 09:43:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4711ebc3668b4b86dc73766d618901f5897da7f6c6d8a9b4d5ef4b1c19c5cfc2-merged.mount: Deactivated successfully.
Jan 22 09:43:14 np0005592157 podman[304419]: 2026-01-22 14:43:14.615995162 +0000 UTC m=+0.581245805 container remove 84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:43:14 np0005592157 systemd[1]: libpod-conmon-84e5c51df13280cb15536c77fb2eb2bfa0402adc69128425c65bffc91c00d35e.scope: Deactivated successfully.
Jan 22 09:43:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:14.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:14 np0005592157 podman[304458]: 2026-01-22 14:43:14.879611583 +0000 UTC m=+0.106882087 container create 5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:43:14 np0005592157 podman[304458]: 2026-01-22 14:43:14.81346741 +0000 UTC m=+0.040737934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:43:14 np0005592157 systemd[1]: Started libpod-conmon-5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0.scope.
Jan 22 09:43:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:43:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec216ac5dbbf70d8e1517021c58357f69466dbe5ce9d90adb7dfe8e85b2f21c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec216ac5dbbf70d8e1517021c58357f69466dbe5ce9d90adb7dfe8e85b2f21c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec216ac5dbbf70d8e1517021c58357f69466dbe5ce9d90adb7dfe8e85b2f21c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec216ac5dbbf70d8e1517021c58357f69466dbe5ce9d90adb7dfe8e85b2f21c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec216ac5dbbf70d8e1517021c58357f69466dbe5ce9d90adb7dfe8e85b2f21c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:15 np0005592157 podman[304458]: 2026-01-22 14:43:15.039114606 +0000 UTC m=+0.266385140 container init 5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_easley, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:43:15 np0005592157 podman[304458]: 2026-01-22 14:43:15.047614847 +0000 UTC m=+0.274885331 container start 5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:43:15 np0005592157 podman[304458]: 2026-01-22 14:43:15.052480768 +0000 UTC m=+0.279751252 container attach 5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:43:15 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:15 np0005592157 zealous_easley[304474]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:43:15 np0005592157 zealous_easley[304474]: --> relative data size: 1.0
Jan 22 09:43:15 np0005592157 zealous_easley[304474]: --> All data devices are unavailable
Jan 22 09:43:15 np0005592157 systemd[1]: libpod-5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0.scope: Deactivated successfully.
Jan 22 09:43:15 np0005592157 podman[304458]: 2026-01-22 14:43:15.884511104 +0000 UTC m=+1.111781558 container died 5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_easley, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:43:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-aec216ac5dbbf70d8e1517021c58357f69466dbe5ce9d90adb7dfe8e85b2f21c-merged.mount: Deactivated successfully.
Jan 22 09:43:16 np0005592157 podman[304458]: 2026-01-22 14:43:16.232279676 +0000 UTC m=+1.459550160 container remove 5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:43:16 np0005592157 systemd[1]: libpod-conmon-5461fc4a5f9649013b5bd85b152859c218daee7ab251ba8a77fdfb35beefb6b0.scope: Deactivated successfully.
Jan 22 09:43:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:43:16 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:16.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:17 np0005592157 podman[304643]: 2026-01-22 14:43:17.004597228 +0000 UTC m=+0.026042908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:43:17 np0005592157 podman[304643]: 2026-01-22 14:43:17.202118346 +0000 UTC m=+0.223564017 container create 95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ramanujan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:43:17 np0005592157 systemd[1]: Started libpod-conmon-95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b.scope.
Jan 22 09:43:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:43:17 np0005592157 podman[304643]: 2026-01-22 14:43:17.289882967 +0000 UTC m=+0.311328707 container init 95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ramanujan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 09:43:17 np0005592157 podman[304643]: 2026-01-22 14:43:17.296067531 +0000 UTC m=+0.317513201 container start 95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ramanujan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:43:17 np0005592157 serene_ramanujan[304660]: 167 167
Jan 22 09:43:17 np0005592157 systemd[1]: libpod-95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b.scope: Deactivated successfully.
Jan 22 09:43:17 np0005592157 podman[304643]: 2026-01-22 14:43:17.30046094 +0000 UTC m=+0.321906630 container attach 95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ramanujan, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:43:17 np0005592157 podman[304643]: 2026-01-22 14:43:17.301024484 +0000 UTC m=+0.322470204 container died 95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ramanujan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:43:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a5d68f9f56b6f8f3d94bb90006f0a22bd25e03a3efba1fe4bc94aaaf8b8fd04c-merged.mount: Deactivated successfully.
Jan 22 09:43:17 np0005592157 podman[304643]: 2026-01-22 14:43:17.508197092 +0000 UTC m=+0.529642762 container remove 95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ramanujan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:43:17 np0005592157 podman[304657]: 2026-01-22 14:43:17.533340317 +0000 UTC m=+0.281817864 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:43:17 np0005592157 systemd[1]: libpod-conmon-95565c21ace85f9e9d1d39feb919534fc0db07877ea228d857f01ab199414f2b.scope: Deactivated successfully.
Jan 22 09:43:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 3988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:17 np0005592157 podman[304705]: 2026-01-22 14:43:17.683871598 +0000 UTC m=+0.034856507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:43:17 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:17 np0005592157 podman[304705]: 2026-01-22 14:43:17.787402021 +0000 UTC m=+0.138386910 container create 4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:43:18 np0005592157 systemd[1]: Started libpod-conmon-4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6.scope.
Jan 22 09:43:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:43:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed11494e7cc51f9e9a4220ab75fe6da5d0977f54c3622162599fbd62edc89f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed11494e7cc51f9e9a4220ab75fe6da5d0977f54c3622162599fbd62edc89f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed11494e7cc51f9e9a4220ab75fe6da5d0977f54c3622162599fbd62edc89f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed11494e7cc51f9e9a4220ab75fe6da5d0977f54c3622162599fbd62edc89f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:18 np0005592157 podman[304705]: 2026-01-22 14:43:18.061372689 +0000 UTC m=+0.412357588 container init 4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:43:18 np0005592157 podman[304705]: 2026-01-22 14:43:18.074475724 +0000 UTC m=+0.425460653 container start 4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:43:18 np0005592157 podman[304705]: 2026-01-22 14:43:18.078618257 +0000 UTC m=+0.429603146 container attach 4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:43:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:18.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]: {
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:    "0": [
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:        {
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "devices": [
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "/dev/loop3"
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            ],
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "lv_name": "ceph_lv0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "lv_size": "7511998464",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "name": "ceph_lv0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "tags": {
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.cluster_name": "ceph",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.crush_device_class": "",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.encrypted": "0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.osd_id": "0",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.type": "block",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:                "ceph.vdo": "0"
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            },
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "type": "block",
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:            "vg_name": "ceph_vg0"
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:        }
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]:    ]
Jan 22 09:43:18 np0005592157 recursing_keldysh[304721]: }
Jan 22 09:43:18 np0005592157 systemd[1]: libpod-4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6.scope: Deactivated successfully.
Jan 22 09:43:18 np0005592157 podman[304705]: 2026-01-22 14:43:18.860373493 +0000 UTC m=+1.211358372 container died 4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:43:18 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 3988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:18 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:18 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fed11494e7cc51f9e9a4220ab75fe6da5d0977f54c3622162599fbd62edc89f0-merged.mount: Deactivated successfully.
Jan 22 09:43:19 np0005592157 podman[304705]: 2026-01-22 14:43:19.219184869 +0000 UTC m=+1.570169758 container remove 4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:43:19 np0005592157 systemd[1]: libpod-conmon-4d98d1ab80776deb22af7d0021f0beefbeb26ae4daf2f543e4e1c0b42cdd33f6.scope: Deactivated successfully.
Jan 22 09:43:19 np0005592157 podman[304885]: 2026-01-22 14:43:19.891409274 +0000 UTC m=+0.091497955 container create 1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:43:19 np0005592157 podman[304885]: 2026-01-22 14:43:19.821316262 +0000 UTC m=+0.021404963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:43:19 np0005592157 systemd[1]: Started libpod-conmon-1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d.scope.
Jan 22 09:43:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:43:20 np0005592157 podman[304885]: 2026-01-22 14:43:20.051243036 +0000 UTC m=+0.251331787 container init 1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 09:43:20 np0005592157 podman[304885]: 2026-01-22 14:43:20.057753458 +0000 UTC m=+0.257842139 container start 1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:43:20 np0005592157 podman[304885]: 2026-01-22 14:43:20.061439089 +0000 UTC m=+0.261527890 container attach 1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:43:20 np0005592157 determined_thompson[304901]: 167 167
Jan 22 09:43:20 np0005592157 systemd[1]: libpod-1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d.scope: Deactivated successfully.
Jan 22 09:43:20 np0005592157 podman[304885]: 2026-01-22 14:43:20.065788487 +0000 UTC m=+0.265877178 container died 1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:43:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-52a2123c184f6dffa38df3f4b26ef0460089ddab413052b314e7e89fb1c363b4-merged.mount: Deactivated successfully.
Jan 22 09:43:20 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:20 np0005592157 podman[304885]: 2026-01-22 14:43:20.300980582 +0000 UTC m=+0.501069293 container remove 1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:43:20 np0005592157 systemd[1]: libpod-conmon-1771465c439c651684e9a291e6bb2852556723c5a19b2fcfae26a265751d018d.scope: Deactivated successfully.
Jan 22 09:43:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:20 np0005592157 podman[304925]: 2026-01-22 14:43:20.484037641 +0000 UTC m=+0.021701750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:43:20 np0005592157 podman[304925]: 2026-01-22 14:43:20.633263549 +0000 UTC m=+0.170927638 container create eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:43:20 np0005592157 systemd[1]: Started libpod-conmon-eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf.scope.
Jan 22 09:43:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:43:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab4ac518afeb5c992e52f083b59cc4d29ef7e4be146da09d4029cf21645c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab4ac518afeb5c992e52f083b59cc4d29ef7e4be146da09d4029cf21645c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab4ac518afeb5c992e52f083b59cc4d29ef7e4be146da09d4029cf21645c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab4ac518afeb5c992e52f083b59cc4d29ef7e4be146da09d4029cf21645c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:43:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:20.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:20 np0005592157 podman[304925]: 2026-01-22 14:43:20.845666668 +0000 UTC m=+0.383330837 container init eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kepler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:43:20 np0005592157 podman[304925]: 2026-01-22 14:43:20.859125132 +0000 UTC m=+0.396789251 container start eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 09:43:20 np0005592157 podman[304925]: 2026-01-22 14:43:20.891757643 +0000 UTC m=+0.429421762 container attach eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kepler, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:43:21 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:21 np0005592157 practical_kepler[304942]: {
Jan 22 09:43:21 np0005592157 practical_kepler[304942]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:43:21 np0005592157 practical_kepler[304942]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:43:21 np0005592157 practical_kepler[304942]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:43:21 np0005592157 practical_kepler[304942]:        "osd_id": 0,
Jan 22 09:43:21 np0005592157 practical_kepler[304942]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:43:21 np0005592157 practical_kepler[304942]:        "type": "bluestore"
Jan 22 09:43:21 np0005592157 practical_kepler[304942]:    }
Jan 22 09:43:21 np0005592157 practical_kepler[304942]: }
Jan 22 09:43:21 np0005592157 systemd[1]: libpod-eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf.scope: Deactivated successfully.
Jan 22 09:43:21 np0005592157 podman[304925]: 2026-01-22 14:43:21.742478073 +0000 UTC m=+1.280142162 container died eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kepler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:43:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-11ab4ac518afeb5c992e52f083b59cc4d29ef7e4be146da09d4029cf21645c20-merged.mount: Deactivated successfully.
Jan 22 09:43:21 np0005592157 podman[304925]: 2026-01-22 14:43:21.994951277 +0000 UTC m=+1.532615356 container remove eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kepler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:43:22 np0005592157 systemd[1]: libpod-conmon-eeef28cd7210bad151d52155b411d2d2f6cd9eb0de740fadefd2b22dbe0d9adf.scope: Deactivated successfully.
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev abaee26f-eeb1-4fb4-8477-8556e3bb8f1f does not exist
Jan 22 09:43:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8a87e9d8-e065-4fbb-a132-4026eaca3578 does not exist
Jan 22 09:43:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 847c6e7e-5fff-4668-8384-f5a48956d93c does not exist
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 3993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:22.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:43:23.039 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:43:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:43:23.041 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:43:23 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:23 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 3993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:24 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:24.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:24.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:25 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:26 np0005592157 podman[305031]: 2026-01-22 14:43:26.404743608 +0000 UTC m=+0.140002510 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 09:43:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:26.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:26 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 3998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:27 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:27 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 3998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:28 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:43:28.043 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:43:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:30 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:30.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:31 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:32 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:33 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:33 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:34 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:35 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:36 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:43:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:43:37 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:38 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:38 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:39 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:39 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:40 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:42 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:43 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:43 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:44 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:44.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:44.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:45 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:46 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:43:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:46.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:47 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:43:47
Jan 22 09:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root']
Jan 22 09:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:43:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:43:47.618 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:43:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:43:47.619 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:43:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:43:47.619 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:43:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:48 np0005592157 podman[305171]: 2026-01-22 14:43:48.361548769 +0000 UTC m=+0.081191168 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:43:48 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:48 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:48.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:49 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:50 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:50.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:50.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:51 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:52 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4022 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:53 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:53 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4022 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:54 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:54.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:55 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:56 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:56.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:57 np0005592157 podman[305194]: 2026-01-22 14:43:57.404346931 +0000 UTC m=+0.129745125 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:43:57 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:57 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:43:57 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:43:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:43:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:58.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:58 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:58 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:43:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:43:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:59.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:43:59 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:59 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:00.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:00 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:01 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:02.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:03 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:03 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:04 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:44:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:44:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:04.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:05 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:06 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:07 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:08 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:08 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:09.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:09 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:10 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:10.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:11 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:12.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:12 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:44:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:44:13 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:13 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:14.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:14 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:14 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:14 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:14.901 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:44:14 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:14.902 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:44:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:44:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:44:15 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:44:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:16.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:17 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4047 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:18 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:18 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4047 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:19 np0005592157 podman[305283]: 2026-01-22 14:44:19.359182402 +0000 UTC m=+0.083983427 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:44:19 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:20.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:20 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:21.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:21 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:21 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:21 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:21.904 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:44:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:22.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4052 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:22 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:23 np0005592157 podman[305473]: 2026-01-22 14:44:23.619777496 +0000 UTC m=+0.229525295 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:44:23 np0005592157 podman[305494]: 2026-01-22 14:44:23.861038741 +0000 UTC m=+0.123773867 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:44:24 np0005592157 podman[305473]: 2026-01-22 14:44:24.11412823 +0000 UTC m=+0.723876009 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:44:24 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4052 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:24 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:44:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:44:25 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:25 np0005592157 podman[305627]: 2026-01-22 14:44:25.310077349 +0000 UTC m=+0.078605594 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:44:25 np0005592157 podman[305627]: 2026-01-22 14:44:25.319021111 +0000 UTC m=+0.087549356 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:44:25 np0005592157 podman[305692]: 2026-01-22 14:44:25.595201524 +0000 UTC m=+0.062580066 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.openshift.expose-services=, version=2.2.4, vendor=Red Hat, Inc., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 09:44:25 np0005592157 podman[305692]: 2026-01-22 14:44:25.605827048 +0000 UTC m=+0.073205570 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 09:44:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:44:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:44:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2181f876-1197-445b-b867-9f25a5d5d581 does not exist
Jan 22 09:44:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ce08221a-47a0-4b4c-97e6-bb5ae98612f7 does not exist
Jan 22 09:44:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ae0acb84-ccfe-4ce1-90d0-4af1d80f0fa0 does not exist
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:44:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:44:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:44:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:27 np0005592157 podman[305998]: 2026-01-22 14:44:27.214292138 +0000 UTC m=+0.047627025 container create 10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:44:27 np0005592157 systemd[1]: Started libpod-conmon-10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014.scope.
Jan 22 09:44:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:44:27 np0005592157 podman[305998]: 2026-01-22 14:44:27.191584693 +0000 UTC m=+0.024919580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:44:27 np0005592157 podman[305998]: 2026-01-22 14:44:27.305567206 +0000 UTC m=+0.138902153 container init 10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:44:27 np0005592157 podman[305998]: 2026-01-22 14:44:27.313061022 +0000 UTC m=+0.146395869 container start 10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:44:27 np0005592157 podman[305998]: 2026-01-22 14:44:27.318093347 +0000 UTC m=+0.151428304 container attach 10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 09:44:27 np0005592157 lucid_torvalds[306014]: 167 167
Jan 22 09:44:27 np0005592157 systemd[1]: libpod-10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014.scope: Deactivated successfully.
Jan 22 09:44:27 np0005592157 podman[305998]: 2026-01-22 14:44:27.322797374 +0000 UTC m=+0.156132261 container died 10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:44:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-53c457f82b58f2d6cc3f7426ea64ff3a56593460dff89fa57928ecb6ae24eddb-merged.mount: Deactivated successfully.
Jan 22 09:44:27 np0005592157 podman[305998]: 2026-01-22 14:44:27.379102283 +0000 UTC m=+0.212437160 container remove 10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:44:27 np0005592157 systemd[1]: libpod-conmon-10aae78cabb0bfb3b7007e2365aba58377cf4ecbb60821f391eb9f50e7e40014.scope: Deactivated successfully.
Jan 22 09:44:27 np0005592157 podman[306037]: 2026-01-22 14:44:27.582303883 +0000 UTC m=+0.053589663 container create 155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:44:27 np0005592157 systemd[1]: Started libpod-conmon-155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262.scope.
Jan 22 09:44:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:44:27 np0005592157 podman[306037]: 2026-01-22 14:44:27.557894746 +0000 UTC m=+0.029180616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:44:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3d02826cfbe9a04e42abe57e4c53e931a563664db253d027caa621bb458674/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3d02826cfbe9a04e42abe57e4c53e931a563664db253d027caa621bb458674/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3d02826cfbe9a04e42abe57e4c53e931a563664db253d027caa621bb458674/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3d02826cfbe9a04e42abe57e4c53e931a563664db253d027caa621bb458674/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3d02826cfbe9a04e42abe57e4c53e931a563664db253d027caa621bb458674/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:27 np0005592157 podman[306037]: 2026-01-22 14:44:27.674632477 +0000 UTC m=+0.145918337 container init 155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:44:27 np0005592157 podman[306037]: 2026-01-22 14:44:27.687260031 +0000 UTC m=+0.158545831 container start 155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:44:27 np0005592157 podman[306037]: 2026-01-22 14:44:27.691674931 +0000 UTC m=+0.162960741 container attach 155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:44:27 np0005592157 podman[306051]: 2026-01-22 14:44:27.734367182 +0000 UTC m=+0.110133238 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 22 09:44:27 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:28 np0005592157 flamboyant_hypatia[306055]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:44:28 np0005592157 flamboyant_hypatia[306055]: --> relative data size: 1.0
Jan 22 09:44:28 np0005592157 flamboyant_hypatia[306055]: --> All data devices are unavailable
Jan 22 09:44:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:28 np0005592157 systemd[1]: libpod-155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262.scope: Deactivated successfully.
Jan 22 09:44:28 np0005592157 podman[306037]: 2026-01-22 14:44:28.532784572 +0000 UTC m=+1.004070352 container died 155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:44:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ed3d02826cfbe9a04e42abe57e4c53e931a563664db253d027caa621bb458674-merged.mount: Deactivated successfully.
Jan 22 09:44:28 np0005592157 podman[306037]: 2026-01-22 14:44:28.767837503 +0000 UTC m=+1.239123293 container remove 155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:44:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:28 np0005592157 systemd[1]: libpod-conmon-155e1da888f62e45c668b67711bf9c71080bf5eb3cb96f62f27e4cb006c97262.scope: Deactivated successfully.
Jan 22 09:44:28 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:28 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:28 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:29.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:29 np0005592157 podman[306298]: 2026-01-22 14:44:29.550456691 +0000 UTC m=+0.054023983 container create b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_borg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:44:29 np0005592157 systemd[1]: Started libpod-conmon-b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1.scope.
Jan 22 09:44:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:44:29 np0005592157 podman[306298]: 2026-01-22 14:44:29.524974248 +0000 UTC m=+0.028541590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:44:29 np0005592157 podman[306298]: 2026-01-22 14:44:29.622372768 +0000 UTC m=+0.125940090 container init b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_borg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:44:29 np0005592157 podman[306298]: 2026-01-22 14:44:29.631008113 +0000 UTC m=+0.134575415 container start b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_borg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:44:29 np0005592157 stupefied_borg[306314]: 167 167
Jan 22 09:44:29 np0005592157 systemd[1]: libpod-b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1.scope: Deactivated successfully.
Jan 22 09:44:29 np0005592157 podman[306298]: 2026-01-22 14:44:29.636032518 +0000 UTC m=+0.139599860 container attach b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_borg, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:44:29 np0005592157 podman[306298]: 2026-01-22 14:44:29.636743966 +0000 UTC m=+0.140311268 container died b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_borg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:44:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9bd01ed79fdb92874275a8bf0fba6055a2734f19997575f0dd18b2e80aa5796b-merged.mount: Deactivated successfully.
Jan 22 09:44:29 np0005592157 podman[306298]: 2026-01-22 14:44:29.677075788 +0000 UTC m=+0.180643060 container remove b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:44:29 np0005592157 systemd[1]: libpod-conmon-b7b25644d867ac979c71f0c836d44828cb7ed80fe9055683274db7916db751c1.scope: Deactivated successfully.
Jan 22 09:44:29 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:29 np0005592157 podman[306340]: 2026-01-22 14:44:29.861796088 +0000 UTC m=+0.040624490 container create 02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:44:29 np0005592157 systemd[1]: Started libpod-conmon-02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0.scope.
Jan 22 09:44:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:44:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec76a4e94e3c17a3c41ad181aea0a82bfceaed4ee75873a4c45475aa37375fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec76a4e94e3c17a3c41ad181aea0a82bfceaed4ee75873a4c45475aa37375fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec76a4e94e3c17a3c41ad181aea0a82bfceaed4ee75873a4c45475aa37375fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec76a4e94e3c17a3c41ad181aea0a82bfceaed4ee75873a4c45475aa37375fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:29 np0005592157 podman[306340]: 2026-01-22 14:44:29.940976576 +0000 UTC m=+0.119804998 container init 02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:44:29 np0005592157 podman[306340]: 2026-01-22 14:44:29.844476508 +0000 UTC m=+0.023304920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:44:29 np0005592157 podman[306340]: 2026-01-22 14:44:29.946000101 +0000 UTC m=+0.124828483 container start 02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:44:29 np0005592157 podman[306340]: 2026-01-22 14:44:29.949744814 +0000 UTC m=+0.128573226 container attach 02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:44:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]: {
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:    "0": [
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:        {
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "devices": [
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "/dev/loop3"
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            ],
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "lv_name": "ceph_lv0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "lv_size": "7511998464",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "name": "ceph_lv0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "tags": {
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.cluster_name": "ceph",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.crush_device_class": "",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.encrypted": "0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.osd_id": "0",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.type": "block",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:                "ceph.vdo": "0"
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            },
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "type": "block",
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:            "vg_name": "ceph_vg0"
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:        }
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]:    ]
Jan 22 09:44:30 np0005592157 quirky_wilson[306356]: }
Jan 22 09:44:30 np0005592157 systemd[1]: libpod-02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0.scope: Deactivated successfully.
Jan 22 09:44:30 np0005592157 podman[306340]: 2026-01-22 14:44:30.837239487 +0000 UTC m=+1.016067899 container died 02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:44:30 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fec76a4e94e3c17a3c41ad181aea0a82bfceaed4ee75873a4c45475aa37375fb-merged.mount: Deactivated successfully.
Jan 22 09:44:30 np0005592157 podman[306340]: 2026-01-22 14:44:30.911100462 +0000 UTC m=+1.089928844 container remove 02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:44:30 np0005592157 systemd[1]: libpod-conmon-02dcbc3fd0ca0b7c437b394387eb30fe55a880cb40dbc24e2353be5e906a98c0.scope: Deactivated successfully.
Jan 22 09:44:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:31.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:44:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 13K writes, 46K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 13K writes, 3926 syncs, 3.41 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1039 writes, 2011 keys, 1039 commit groups, 1.0 writes per commit group, ingest: 0.80 MB, 0.00 MB/s#012Interval WAL: 1039 writes, 494 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:44:31 np0005592157 podman[306517]: 2026-01-22 14:44:31.602063883 +0000 UTC m=+0.057095320 container create d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:44:31 np0005592157 systemd[1]: Started libpod-conmon-d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b.scope.
Jan 22 09:44:31 np0005592157 podman[306517]: 2026-01-22 14:44:31.574542399 +0000 UTC m=+0.029573946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:44:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:44:31 np0005592157 podman[306517]: 2026-01-22 14:44:31.693242218 +0000 UTC m=+0.148273675 container init d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:44:31 np0005592157 podman[306517]: 2026-01-22 14:44:31.704848607 +0000 UTC m=+0.159880034 container start d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:44:31 np0005592157 podman[306517]: 2026-01-22 14:44:31.708644491 +0000 UTC m=+0.163675958 container attach d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:44:31 np0005592157 frosty_lovelace[306534]: 167 167
Jan 22 09:44:31 np0005592157 systemd[1]: libpod-d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b.scope: Deactivated successfully.
Jan 22 09:44:31 np0005592157 podman[306517]: 2026-01-22 14:44:31.711697377 +0000 UTC m=+0.166728824 container died d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:44:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-503e9e8525aaadc624d73a7622bcab115123c4d3fe00e293fd53f8582b9acf2c-merged.mount: Deactivated successfully.
Jan 22 09:44:31 np0005592157 podman[306517]: 2026-01-22 14:44:31.758546461 +0000 UTC m=+0.213577898 container remove d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:44:31 np0005592157 systemd[1]: libpod-conmon-d4031397a5671bd44d6783738f6e79fcbaac3fa302b12d2f87009558a689040b.scope: Deactivated successfully.
Jan 22 09:44:31 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:31 np0005592157 podman[306558]: 2026-01-22 14:44:31.925263734 +0000 UTC m=+0.047142052 container create ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:44:31 np0005592157 systemd[1]: Started libpod-conmon-ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8.scope.
Jan 22 09:44:32 np0005592157 podman[306558]: 2026-01-22 14:44:31.908281672 +0000 UTC m=+0.030160020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:44:32 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:44:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af3b6190a32ec8bfd43ff514a3e2a9581c8ba97e03d340a024d39cc66d2656a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af3b6190a32ec8bfd43ff514a3e2a9581c8ba97e03d340a024d39cc66d2656a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af3b6190a32ec8bfd43ff514a3e2a9581c8ba97e03d340a024d39cc66d2656a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af3b6190a32ec8bfd43ff514a3e2a9581c8ba97e03d340a024d39cc66d2656a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:44:32 np0005592157 podman[306558]: 2026-01-22 14:44:32.029202157 +0000 UTC m=+0.151080575 container init ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 09:44:32 np0005592157 podman[306558]: 2026-01-22 14:44:32.042301343 +0000 UTC m=+0.164179711 container start ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:44:32 np0005592157 podman[306558]: 2026-01-22 14:44:32.047240075 +0000 UTC m=+0.169118433 container attach ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:44:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:32 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]: {
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]:        "osd_id": 0,
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]:        "type": "bluestore"
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]:    }
Jan 22 09:44:32 np0005592157 xenodochial_thompson[306575]: }
Jan 22 09:44:32 np0005592157 systemd[1]: libpod-ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8.scope: Deactivated successfully.
Jan 22 09:44:33 np0005592157 podman[306596]: 2026-01-22 14:44:33.033204816 +0000 UTC m=+0.030895038 container died ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:44:33 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2af3b6190a32ec8bfd43ff514a3e2a9581c8ba97e03d340a024d39cc66d2656a-merged.mount: Deactivated successfully.
Jan 22 09:44:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:33 np0005592157 podman[306596]: 2026-01-22 14:44:33.084315297 +0000 UTC m=+0.082005499 container remove ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:44:33 np0005592157 systemd[1]: libpod-conmon-ff6cc12ba38d2e860bda4b18d06c4f883f330ed338071ecd37b9beb566c6aaf8.scope: Deactivated successfully.
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9dac78ce-44ef-4026-bb0e-6f759757d6dc does not exist
Jan 22 09:44:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 57e68a52-e0a1-4905-9e26-69548cf0830c does not exist
Jan 22 09:44:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 18dcf630-9f83-4afa-89c4-570d00b1ebd7 does not exist
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:34 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:35 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:36.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:37 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:37.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:38 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:38 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:44:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:39 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:40 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 09:44:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:41 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 09:44:42 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:42.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:43 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:43 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 694 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 614 KiB/s wr, 13 op/s
Jan 22 09:44:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 09:44:44 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:45 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:45 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 09:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:44:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:46.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:46 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:44:47
Jan 22 09:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'volumes']
Jan 22 09:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:44:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:47.620 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:44:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:47.620 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:44:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:47.621 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:44:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:48 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:48 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 09:44:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:48.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:49 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:50 np0005592157 podman[306720]: 2026-01-22 14:44:50.32779641 +0000 UTC m=+0.060253938 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:44:50 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 09:44:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:50.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:51 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:52 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 22 09:44:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:52.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:53 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:53 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:54 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 711 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 22 09:44:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:54.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:55.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:55 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:56 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:56.424 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:44:56 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:44:56.425 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:44:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Jan 22 09:44:56 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:44:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:56.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:44:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:44:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:57.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:44:57 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:58 np0005592157 podman[306744]: 2026-01-22 14:44:58.359687281 +0000 UTC m=+0.093008143 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:44:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:44:58 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:58 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:58.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:44:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:44:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:59.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:44:59 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:59 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:45:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:00.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:00 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:01 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:45:01.427 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:45:01 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:45:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:02.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:02 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:02 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:03.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:04 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:45:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:45:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:04.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:05 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:06 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 22 09:45:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:06.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:07.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.913542) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107913674, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2084, "num_deletes": 257, "total_data_size": 3004006, "memory_usage": 3060432, "flush_reason": "Manual Compaction"}
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107944616, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 2933109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62633, "largest_seqno": 64716, "table_properties": {"data_size": 2924499, "index_size": 4911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21758, "raw_average_key_size": 21, "raw_value_size": 2905497, "raw_average_value_size": 2807, "num_data_blocks": 213, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092948, "oldest_key_time": 1769092948, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 31228 microseconds, and 10357 cpu microseconds.
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.944830) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 2933109 bytes OK
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.944892) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.949976) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.950007) EVENT_LOG_v1 {"time_micros": 1769093107950000, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.950027) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2995173, prev total WAL file size 2995173, number of live WAL files 2.
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.951321) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323637' seq:0, type:0; will stop at (end)
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(2864KB)], [140(9149KB)]
Jan 22 09:45:07 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107951389, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 12301766, "oldest_snapshot_seqno": -1}
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 11685 keys, 12155080 bytes, temperature: kUnknown
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108058266, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 12155080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12089069, "index_size": 35690, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 316633, "raw_average_key_size": 27, "raw_value_size": 11888236, "raw_average_value_size": 1017, "num_data_blocks": 1339, "num_entries": 11685, "num_filter_entries": 11685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.058564) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12155080 bytes
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.059884) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.0 rd, 113.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 12212, records dropped: 527 output_compression: NoCompression
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.059903) EVENT_LOG_v1 {"time_micros": 1769093108059893, "job": 86, "event": "compaction_finished", "compaction_time_micros": 106974, "compaction_time_cpu_micros": 34010, "output_level": 6, "num_output_files": 1, "total_output_size": 12155080, "num_input_records": 12212, "num_output_records": 11685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108060458, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108061910, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:07.951133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.062021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.062029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.062032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.062034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:08.062036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:08 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:08.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:09.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:09 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:10 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:10.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:11.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:11 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:12.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 52 slow ops, oldest one blocked for 4103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:12 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:13.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:13 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:13 np0005592157 ceph-mon[74359]: Health check update: 52 slow ops, oldest one blocked for 4103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:13 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:14.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:14 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:15 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:45:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:16.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:16 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4107 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 4107 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.711216) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118711291, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 375, "num_deletes": 251, "total_data_size": 194972, "memory_usage": 202288, "flush_reason": "Manual Compaction"}
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118715055, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 192034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64717, "largest_seqno": 65091, "table_properties": {"data_size": 189789, "index_size": 344, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5891, "raw_average_key_size": 18, "raw_value_size": 185289, "raw_average_value_size": 595, "num_data_blocks": 15, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093108, "oldest_key_time": 1769093108, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3864 microseconds, and 1391 cpu microseconds.
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.715090) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 192034 bytes OK
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.715104) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717077) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717091) EVENT_LOG_v1 {"time_micros": 1769093118717086, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717108) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 192516, prev total WAL file size 192516, number of live WAL files 2.
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717489) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(187KB)], [143(11MB)]
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118717596, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 12347114, "oldest_snapshot_seqno": -1}
Jan 22 09:45:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 11485 keys, 10714981 bytes, temperature: kUnknown
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118834226, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10714981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10651386, "index_size": 33786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28741, "raw_key_size": 313246, "raw_average_key_size": 27, "raw_value_size": 10454994, "raw_average_value_size": 910, "num_data_blocks": 1254, "num_entries": 11485, "num_filter_entries": 11485, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.834568) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10714981 bytes
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.836189) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.7 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(120.1) write-amplify(55.8) OK, records in: 11996, records dropped: 511 output_compression: NoCompression
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.836205) EVENT_LOG_v1 {"time_micros": 1769093118836197, "job": 88, "event": "compaction_finished", "compaction_time_micros": 116790, "compaction_time_cpu_micros": 37465, "output_level": 6, "num_output_files": 1, "total_output_size": 10714981, "num_input_records": 11996, "num_output_records": 11485, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118836486, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118838729, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.838796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.838800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.838802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.838803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:45:18.838804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:45:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:45:19 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:20 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:20.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:21 np0005592157 podman[306832]: 2026-01-22 14:45:21.33109232 +0000 UTC m=+0.060656308 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 09:45:21 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:22 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:22.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4112 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:23.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:23 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:23 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 4112 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:24 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:24.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:25.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:25 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:26 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:27.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:27 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:28 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:28 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 4118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:45:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:45:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:29.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:29 np0005592157 podman[306906]: 2026-01-22 14:45:29.386339592 +0000 UTC m=+0.121117040 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:45:29 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:31 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:31.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:32 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:32.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:33 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:33 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 4123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:34 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:34.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c802b59e-afa9-4a31-8d7a-f60489d6c199 does not exist
Jan 22 09:45:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5f8e5c6c-7457-48ec-8d53-17ba3c6237e9 does not exist
Jan 22 09:45:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6d28f378-3218-4cc8-91a9-f4d17e7e161c does not exist
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:45:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:45:36 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:45:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:45:36 np0005592157 podman[307208]: 2026-01-22 14:45:36.296737094 +0000 UTC m=+0.042901767 container create e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:45:36 np0005592157 systemd[1]: Started libpod-conmon-e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5.scope.
Jan 22 09:45:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:45:36 np0005592157 podman[307208]: 2026-01-22 14:45:36.281106316 +0000 UTC m=+0.027271009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:45:36 np0005592157 podman[307208]: 2026-01-22 14:45:36.388432843 +0000 UTC m=+0.134597586 container init e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:45:36 np0005592157 podman[307208]: 2026-01-22 14:45:36.396074933 +0000 UTC m=+0.142239636 container start e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 09:45:36 np0005592157 podman[307208]: 2026-01-22 14:45:36.400471232 +0000 UTC m=+0.146635925 container attach e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:45:36 np0005592157 naughty_kalam[307224]: 167 167
Jan 22 09:45:36 np0005592157 systemd[1]: libpod-e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5.scope: Deactivated successfully.
Jan 22 09:45:36 np0005592157 podman[307208]: 2026-01-22 14:45:36.402137783 +0000 UTC m=+0.148302466 container died e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:45:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4c9e137cc4a24932ed49239e3941950e68634cf399921f0c384d2a9571da8dc2-merged.mount: Deactivated successfully.
Jan 22 09:45:36 np0005592157 podman[307208]: 2026-01-22 14:45:36.445257195 +0000 UTC m=+0.191421868 container remove e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:45:36 np0005592157 systemd[1]: libpod-conmon-e2311820e9425b6985618c0a7ce09557760c7009203908fb9778087abd5b52f5.scope: Deactivated successfully.
Jan 22 09:45:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:36 np0005592157 podman[307249]: 2026-01-22 14:45:36.593682693 +0000 UTC m=+0.041379869 container create 7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:45:36 np0005592157 systemd[1]: Started libpod-conmon-7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d.scope.
Jan 22 09:45:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:45:36 np0005592157 podman[307249]: 2026-01-22 14:45:36.575171203 +0000 UTC m=+0.022868359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:45:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b11eb7430ecb1992fe22487792a3bc1f908ecfb24c323f6a2e625bef0e8369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b11eb7430ecb1992fe22487792a3bc1f908ecfb24c323f6a2e625bef0e8369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b11eb7430ecb1992fe22487792a3bc1f908ecfb24c323f6a2e625bef0e8369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b11eb7430ecb1992fe22487792a3bc1f908ecfb24c323f6a2e625bef0e8369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b11eb7430ecb1992fe22487792a3bc1f908ecfb24c323f6a2e625bef0e8369/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:36 np0005592157 podman[307249]: 2026-01-22 14:45:36.695892783 +0000 UTC m=+0.143589959 container init 7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:45:36 np0005592157 podman[307249]: 2026-01-22 14:45:36.70421781 +0000 UTC m=+0.151914976 container start 7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:45:36 np0005592157 podman[307249]: 2026-01-22 14:45:36.708014224 +0000 UTC m=+0.155711410 container attach 7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:45:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:37 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:37.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:37 np0005592157 zen_meitner[307264]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:45:37 np0005592157 zen_meitner[307264]: --> relative data size: 1.0
Jan 22 09:45:37 np0005592157 zen_meitner[307264]: --> All data devices are unavailable
Jan 22 09:45:37 np0005592157 systemd[1]: libpod-7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d.scope: Deactivated successfully.
Jan 22 09:45:37 np0005592157 podman[307249]: 2026-01-22 14:45:37.485048294 +0000 UTC m=+0.932745450 container died 7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:45:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e0b11eb7430ecb1992fe22487792a3bc1f908ecfb24c323f6a2e625bef0e8369-merged.mount: Deactivated successfully.
Jan 22 09:45:37 np0005592157 podman[307249]: 2026-01-22 14:45:37.544379228 +0000 UTC m=+0.992076394 container remove 7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 09:45:37 np0005592157 systemd[1]: libpod-conmon-7ddf07db8f6f2c1ff126ab0aca01bde434bc6af1b8b8d6f222aa5ab43c128a2d.scope: Deactivated successfully.
Jan 22 09:45:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:38 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:38 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 4128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:38 np0005592157 podman[307435]: 2026-01-22 14:45:38.289839232 +0000 UTC m=+0.062247818 container create dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yonath, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:45:38 np0005592157 systemd[1]: Started libpod-conmon-dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1.scope.
Jan 22 09:45:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:45:38 np0005592157 podman[307435]: 2026-01-22 14:45:38.268498911 +0000 UTC m=+0.040907487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:45:38 np0005592157 podman[307435]: 2026-01-22 14:45:38.375438049 +0000 UTC m=+0.147846615 container init dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yonath, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 09:45:38 np0005592157 podman[307435]: 2026-01-22 14:45:38.38191824 +0000 UTC m=+0.154326796 container start dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yonath, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:45:38 np0005592157 podman[307435]: 2026-01-22 14:45:38.385971181 +0000 UTC m=+0.158379727 container attach dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yonath, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:45:38 np0005592157 sleepy_yonath[307451]: 167 167
Jan 22 09:45:38 np0005592157 systemd[1]: libpod-dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1.scope: Deactivated successfully.
Jan 22 09:45:38 np0005592157 podman[307435]: 2026-01-22 14:45:38.388982005 +0000 UTC m=+0.161390561 container died dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:45:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3007bbdf4577c4f632e50a88345e65a56e18e7f8fc9461912e23c87f05967053-merged.mount: Deactivated successfully.
Jan 22 09:45:38 np0005592157 podman[307435]: 2026-01-22 14:45:38.425740309 +0000 UTC m=+0.198148855 container remove dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:45:38 np0005592157 systemd[1]: libpod-conmon-dc6158da7805bd213ca846aea2e0be196e3463c6ad483915c301222a2d68c7d1.scope: Deactivated successfully.
Jan 22 09:45:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:38 np0005592157 podman[307474]: 2026-01-22 14:45:38.642904435 +0000 UTC m=+0.058550846 container create e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:45:38 np0005592157 systemd[1]: Started libpod-conmon-e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada.scope.
Jan 22 09:45:38 np0005592157 podman[307474]: 2026-01-22 14:45:38.619980396 +0000 UTC m=+0.035626787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:45:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:45:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b342b5026aeabf56d9d7c8639fdf8164c938e0ddfce75c1d6512826753a00138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b342b5026aeabf56d9d7c8639fdf8164c938e0ddfce75c1d6512826753a00138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b342b5026aeabf56d9d7c8639fdf8164c938e0ddfce75c1d6512826753a00138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b342b5026aeabf56d9d7c8639fdf8164c938e0ddfce75c1d6512826753a00138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:38 np0005592157 podman[307474]: 2026-01-22 14:45:38.752864548 +0000 UTC m=+0.168510979 container init e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:45:38 np0005592157 podman[307474]: 2026-01-22 14:45:38.761241786 +0000 UTC m=+0.176888187 container start e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:45:38 np0005592157 podman[307474]: 2026-01-22 14:45:38.765346728 +0000 UTC m=+0.180993159 container attach e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_germain, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:45:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:38.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:39 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:39.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]: {
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:    "0": [
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:        {
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "devices": [
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "/dev/loop3"
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            ],
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "lv_name": "ceph_lv0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "lv_size": "7511998464",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "name": "ceph_lv0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "tags": {
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.cluster_name": "ceph",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.crush_device_class": "",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.encrypted": "0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.osd_id": "0",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.type": "block",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:                "ceph.vdo": "0"
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            },
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "type": "block",
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:            "vg_name": "ceph_vg0"
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:        }
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]:    ]
Jan 22 09:45:39 np0005592157 upbeat_germain[307491]: }
Jan 22 09:45:39 np0005592157 systemd[1]: libpod-e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada.scope: Deactivated successfully.
Jan 22 09:45:39 np0005592157 podman[307474]: 2026-01-22 14:45:39.616854458 +0000 UTC m=+1.032500869 container died e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_germain, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:45:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b342b5026aeabf56d9d7c8639fdf8164c938e0ddfce75c1d6512826753a00138-merged.mount: Deactivated successfully.
Jan 22 09:45:39 np0005592157 podman[307474]: 2026-01-22 14:45:39.679862794 +0000 UTC m=+1.095509205 container remove e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_germain, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:45:39 np0005592157 systemd[1]: libpod-conmon-e05cebc8efdb37f2795bd94d068a77776642389b2cd9fc7d42397dac2f83eada.scope: Deactivated successfully.
Jan 22 09:45:40 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:40 np0005592157 podman[307653]: 2026-01-22 14:45:40.31109046 +0000 UTC m=+0.051016959 container create 3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:45:40 np0005592157 systemd[1]: Started libpod-conmon-3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755.scope.
Jan 22 09:45:40 np0005592157 podman[307653]: 2026-01-22 14:45:40.284101069 +0000 UTC m=+0.024027668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:45:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:45:40 np0005592157 podman[307653]: 2026-01-22 14:45:40.401055165 +0000 UTC m=+0.140981704 container init 3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:45:40 np0005592157 podman[307653]: 2026-01-22 14:45:40.407709931 +0000 UTC m=+0.147636460 container start 3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:45:40 np0005592157 great_dijkstra[307669]: 167 167
Jan 22 09:45:40 np0005592157 systemd[1]: libpod-3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755.scope: Deactivated successfully.
Jan 22 09:45:40 np0005592157 podman[307653]: 2026-01-22 14:45:40.415182897 +0000 UTC m=+0.155109436 container attach 3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:45:40 np0005592157 podman[307653]: 2026-01-22 14:45:40.416001367 +0000 UTC m=+0.155927906 container died 3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:45:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2a2ea2c014006defa9844bde078102d26b7c4b986c0f86ccb019ffeeb2eeea9a-merged.mount: Deactivated successfully.
Jan 22 09:45:40 np0005592157 podman[307653]: 2026-01-22 14:45:40.471498476 +0000 UTC m=+0.211425015 container remove 3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:45:40 np0005592157 systemd[1]: libpod-conmon-3c01e0fce10a79305295bc936e60a2d7b20e9df0c364ac015da99a7b66b24755.scope: Deactivated successfully.
Jan 22 09:45:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:40 np0005592157 podman[307695]: 2026-01-22 14:45:40.663269982 +0000 UTC m=+0.030297184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:45:40 np0005592157 podman[307695]: 2026-01-22 14:45:40.82857629 +0000 UTC m=+0.195603412 container create 7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:45:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:40.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:40 np0005592157 systemd[1]: Started libpod-conmon-7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702.scope.
Jan 22 09:45:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:45:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1d021fa79e9e06b4c5ab9c7ee452674739628de99c78e09d3697f8f1f8be77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1d021fa79e9e06b4c5ab9c7ee452674739628de99c78e09d3697f8f1f8be77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1d021fa79e9e06b4c5ab9c7ee452674739628de99c78e09d3697f8f1f8be77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1d021fa79e9e06b4c5ab9c7ee452674739628de99c78e09d3697f8f1f8be77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:45:40 np0005592157 podman[307695]: 2026-01-22 14:45:40.95979211 +0000 UTC m=+0.326819292 container init 7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:45:40 np0005592157 podman[307695]: 2026-01-22 14:45:40.968060356 +0000 UTC m=+0.335087448 container start 7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:45:40 np0005592157 podman[307695]: 2026-01-22 14:45:40.972804304 +0000 UTC m=+0.339831396 container attach 7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:45:41 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:41.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:41 np0005592157 determined_bohr[307711]: {
Jan 22 09:45:41 np0005592157 determined_bohr[307711]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:45:41 np0005592157 determined_bohr[307711]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:45:41 np0005592157 determined_bohr[307711]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:45:41 np0005592157 determined_bohr[307711]:        "osd_id": 0,
Jan 22 09:45:41 np0005592157 determined_bohr[307711]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:45:41 np0005592157 determined_bohr[307711]:        "type": "bluestore"
Jan 22 09:45:41 np0005592157 determined_bohr[307711]:    }
Jan 22 09:45:41 np0005592157 determined_bohr[307711]: }
Jan 22 09:45:41 np0005592157 systemd[1]: libpod-7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702.scope: Deactivated successfully.
Jan 22 09:45:41 np0005592157 podman[307695]: 2026-01-22 14:45:41.784688018 +0000 UTC m=+1.151715110 container died 7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 22 09:45:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8f1d021fa79e9e06b4c5ab9c7ee452674739628de99c78e09d3697f8f1f8be77-merged.mount: Deactivated successfully.
Jan 22 09:45:41 np0005592157 podman[307695]: 2026-01-22 14:45:41.837424798 +0000 UTC m=+1.204451890 container remove 7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:45:41 np0005592157 systemd[1]: libpod-conmon-7393fec9b044921307f5eb0a8ff88b7b70d9221374b48ffe7f16d4ab5dd89702.scope: Deactivated successfully.
Jan 22 09:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:45:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:45:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c0f07bbc-75f4-41c7-afea-395a104ca376 does not exist
Jan 22 09:45:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2a328fca-daf7-46f3-859b-73cb80e429dc does not exist
Jan 22 09:45:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3c6b05c7-084b-4356-8dd8-7a18e3aaab1d does not exist
Jan 22 09:45:42 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:42.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:43.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:43 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:43 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 4133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:44 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:44.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:45.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:45 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:46 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:45:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:46.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:47.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:47 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:45:47
Jan 22 09:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control']
Jan 22 09:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:45:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:45:47.621 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:45:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:45:47.622 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:45:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:45:47.622 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:45:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:48 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:48 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:49 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:50 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:51 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:51 np0005592157 podman[307851]: 2026-01-22 14:45:51.829301133 +0000 UTC m=+0.063558940 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 09:45:52 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:53 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:53 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:54 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:45:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:45:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:45:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:45:55 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:56 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:45:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:57.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:45:57 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:45:57.704 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:45:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:45:57.706 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:45:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:45:57.708 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:45:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:45:58 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:58 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:45:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:59 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:46:00 np0005592157 podman[307877]: 2026-01-22 14:46:00.394043204 +0000 UTC m=+0.127223292 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:46:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:00 np0005592157 ceph-mon[74359]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:01 np0005592157 ceph-mon[74359]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592157 ceph-mon[74359]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:02 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 46 slow ops, oldest one blocked for 4153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:03.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:03 np0005592157 ceph-mon[74359]: Health check update: 46 slow ops, oldest one blocked for 4153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:03 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:46:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:46:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:04 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:05.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:05 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:07 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:07.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 4158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:08 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:08 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 4158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:08.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:09 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:09.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:10 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:11 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:11.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:12 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 4163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:13 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:13 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 4163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:14 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:46:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:46:15 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:46:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:46:16 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:46:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:46:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:16.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:17 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:17.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 4168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:18 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:18 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 4168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:19 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:19.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:20 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 09:46:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:21.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:21 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:22 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:22 np0005592157 podman[307965]: 2026-01-22 14:46:22.329507754 +0000 UTC m=+0.066390771 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:46:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 09:46:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:22.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 4173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:23 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:23 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 4173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:24 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 09:46:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:24.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:25.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:25 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:26 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 09:46:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:26.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:27 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 4178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:28 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:28 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 4178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 09:46:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:28.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:29 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:30 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 09:46:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:30.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:31 np0005592157 podman[308038]: 2026-01-22 14:46:31.383842319 +0000 UTC m=+0.123740976 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 09:46:31 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:32 np0005592157 ceph-mon[74359]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:32.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 4183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:33.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:33 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:33 np0005592157 ceph-mon[74359]: Health check update: 1 slow ops, oldest one blocked for 4183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 22 09:46:34 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:35.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:35 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 09:46:36 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:36.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:37.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:37 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 4188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 09:46:38 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:38 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 4188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:39 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 09:46:40 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:40.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:41 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 09:46:42 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:42.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 4193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:43.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4271fafb-7a2a-4583-b178-c012072146ec does not exist
Jan 22 09:46:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ecd6636e-4743-4679-bbfa-4500a684742b does not exist
Jan 22 09:46:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9bc5a3fe-e0c3-4036-86fb-7af9a84c59c3 does not exist
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 4193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:46:44 np0005592157 podman[308465]: 2026-01-22 14:46:44.490173845 +0000 UTC m=+0.052972867 container create eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:46:44 np0005592157 systemd[1]: Started libpod-conmon-eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935.scope.
Jan 22 09:46:44 np0005592157 podman[308465]: 2026-01-22 14:46:44.469166893 +0000 UTC m=+0.031965915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:46:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 09:46:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:46:44 np0005592157 podman[308465]: 2026-01-22 14:46:44.58856661 +0000 UTC m=+0.151365662 container init eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_noyce, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:46:44 np0005592157 podman[308465]: 2026-01-22 14:46:44.59498415 +0000 UTC m=+0.157783172 container start eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:46:44 np0005592157 podman[308465]: 2026-01-22 14:46:44.598331633 +0000 UTC m=+0.161130685 container attach eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_noyce, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 09:46:44 np0005592157 objective_noyce[308481]: 167 167
Jan 22 09:46:44 np0005592157 systemd[1]: libpod-eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935.scope: Deactivated successfully.
Jan 22 09:46:44 np0005592157 podman[308465]: 2026-01-22 14:46:44.601111022 +0000 UTC m=+0.163910084 container died eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:46:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-15752a46e6e43049f174c5e49825ce8df5b410482818cd530281d045462565fc-merged.mount: Deactivated successfully.
Jan 22 09:46:44 np0005592157 podman[308465]: 2026-01-22 14:46:44.646828278 +0000 UTC m=+0.209627290 container remove eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_noyce, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:46:44 np0005592157 systemd[1]: libpod-conmon-eaa21bb90ecee6885bfcbdf178d5154f948d8b80936bbda5360c54e3a3fdb935.scope: Deactivated successfully.
Jan 22 09:46:44 np0005592157 podman[308505]: 2026-01-22 14:46:44.800370633 +0000 UTC m=+0.048844764 container create a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:46:44 np0005592157 systemd[1]: Started libpod-conmon-a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164.scope.
Jan 22 09:46:44 np0005592157 podman[308505]: 2026-01-22 14:46:44.77647711 +0000 UTC m=+0.024951251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:46:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:46:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cb14b9b193d67c25561aaf8447e209a35bd63dd5ce7a8739a3499fa371cda6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cb14b9b193d67c25561aaf8447e209a35bd63dd5ce7a8739a3499fa371cda6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cb14b9b193d67c25561aaf8447e209a35bd63dd5ce7a8739a3499fa371cda6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cb14b9b193d67c25561aaf8447e209a35bd63dd5ce7a8739a3499fa371cda6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cb14b9b193d67c25561aaf8447e209a35bd63dd5ce7a8739a3499fa371cda6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:44 np0005592157 podman[308505]: 2026-01-22 14:46:44.911911295 +0000 UTC m=+0.160385456 container init a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 09:46:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:46:44 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:44 np0005592157 podman[308505]: 2026-01-22 14:46:44.923510734 +0000 UTC m=+0.171984845 container start a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:46:44 np0005592157 podman[308505]: 2026-01-22 14:46:44.928126858 +0000 UTC m=+0.176601059 container attach a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 09:46:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:44.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:45.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:45 np0005592157 condescending_hugle[308521]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:46:45 np0005592157 condescending_hugle[308521]: --> relative data size: 1.0
Jan 22 09:46:45 np0005592157 condescending_hugle[308521]: --> All data devices are unavailable
Jan 22 09:46:45 np0005592157 systemd[1]: libpod-a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164.scope: Deactivated successfully.
Jan 22 09:46:45 np0005592157 podman[308505]: 2026-01-22 14:46:45.763308602 +0000 UTC m=+1.011782744 container died a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:46:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d4cb14b9b193d67c25561aaf8447e209a35bd63dd5ce7a8739a3499fa371cda6-merged.mount: Deactivated successfully.
Jan 22 09:46:45 np0005592157 podman[308505]: 2026-01-22 14:46:45.823408086 +0000 UTC m=+1.071882187 container remove a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:46:45 np0005592157 systemd[1]: libpod-conmon-a27dcb5cca7ebcebb94199cc91267087a25b017812aa4e9e893c41e358ded164.scope: Deactivated successfully.
Jan 22 09:46:45 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:46:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:46:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 8 op/s
Jan 22 09:46:46 np0005592157 podman[308690]: 2026-01-22 14:46:46.574005817 +0000 UTC m=+0.066555065 container create fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:46:46 np0005592157 systemd[1]: Started libpod-conmon-fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934.scope.
Jan 22 09:46:46 np0005592157 podman[308690]: 2026-01-22 14:46:46.544896434 +0000 UTC m=+0.037445742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:46:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:46:46 np0005592157 podman[308690]: 2026-01-22 14:46:46.664112406 +0000 UTC m=+0.156661614 container init fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:46:46 np0005592157 podman[308690]: 2026-01-22 14:46:46.674876484 +0000 UTC m=+0.167425692 container start fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yalow, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:46:46 np0005592157 nostalgic_yalow[308706]: 167 167
Jan 22 09:46:46 np0005592157 podman[308690]: 2026-01-22 14:46:46.678496374 +0000 UTC m=+0.171045582 container attach fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:46:46 np0005592157 systemd[1]: libpod-fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934.scope: Deactivated successfully.
Jan 22 09:46:46 np0005592157 conmon[308706]: conmon fb27b9d8673da665f27e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934.scope/container/memory.events
Jan 22 09:46:46 np0005592157 podman[308690]: 2026-01-22 14:46:46.681053477 +0000 UTC m=+0.173602715 container died fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:46:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4acfd736a1c8458e5b2f197f8ff46025c227b0931293a96197b07b2bc83a8b9e-merged.mount: Deactivated successfully.
Jan 22 09:46:46 np0005592157 podman[308690]: 2026-01-22 14:46:46.72946094 +0000 UTC m=+0.222010148 container remove fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:46:46 np0005592157 systemd[1]: libpod-conmon-fb27b9d8673da665f27e555a884ed12aabcc933de1566a3ed851abb0c53a1934.scope: Deactivated successfully.
Jan 22 09:46:46 np0005592157 podman[308730]: 2026-01-22 14:46:46.916467077 +0000 UTC m=+0.045562113 container create 2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:46:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:46 np0005592157 systemd[1]: Started libpod-conmon-2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74.scope.
Jan 22 09:46:46 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:46:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75695a4e593363861747fed4371e87efa2b9dc5ef5679e671d7ca946d2c68306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75695a4e593363861747fed4371e87efa2b9dc5ef5679e671d7ca946d2c68306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75695a4e593363861747fed4371e87efa2b9dc5ef5679e671d7ca946d2c68306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75695a4e593363861747fed4371e87efa2b9dc5ef5679e671d7ca946d2c68306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:46 np0005592157 podman[308730]: 2026-01-22 14:46:46.899092056 +0000 UTC m=+0.028187142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:46:47 np0005592157 podman[308730]: 2026-01-22 14:46:47.015228242 +0000 UTC m=+0.144323308 container init 2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:46:47 np0005592157 podman[308730]: 2026-01-22 14:46:47.026267476 +0000 UTC m=+0.155362512 container start 2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:46:47 np0005592157 podman[308730]: 2026-01-22 14:46:47.029721162 +0000 UTC m=+0.158816238 container attach 2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:46:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:47.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:46:47
Jan 22 09:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.rgw.root']
Jan 22 09:46:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:46:47.622 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:46:47.625 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:46:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:46:47.625 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]: {
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:    "0": [
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:        {
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "devices": [
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "/dev/loop3"
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            ],
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "lv_name": "ceph_lv0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "lv_size": "7511998464",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "name": "ceph_lv0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "tags": {
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.cluster_name": "ceph",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.crush_device_class": "",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.encrypted": "0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.osd_id": "0",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.type": "block",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:                "ceph.vdo": "0"
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            },
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "type": "block",
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:            "vg_name": "ceph_vg0"
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:        }
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]:    ]
Jan 22 09:46:47 np0005592157 admiring_tesla[308746]: }
Jan 22 09:46:47 np0005592157 systemd[1]: libpod-2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74.scope: Deactivated successfully.
Jan 22 09:46:47 np0005592157 podman[308730]: 2026-01-22 14:46:47.753724723 +0000 UTC m=+0.882819759 container died 2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:46:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-75695a4e593363861747fed4371e87efa2b9dc5ef5679e671d7ca946d2c68306-merged.mount: Deactivated successfully.
Jan 22 09:46:47 np0005592157 podman[308730]: 2026-01-22 14:46:47.812087824 +0000 UTC m=+0.941182860 container remove 2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:46:47 np0005592157 systemd[1]: libpod-conmon-2f498f85672b66bc65bfd1eb28ed5260961eedc539f6fad5b6556fcf578d8e74.scope: Deactivated successfully.
Jan 22 09:46:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 4198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:47 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:47 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 4198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:48 np0005592157 podman[308910]: 2026-01-22 14:46:48.393744138 +0000 UTC m=+0.038621061 container create 56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:46:48 np0005592157 systemd[1]: Started libpod-conmon-56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35.scope.
Jan 22 09:46:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:46:48 np0005592157 podman[308910]: 2026-01-22 14:46:48.377470113 +0000 UTC m=+0.022347046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:46:48 np0005592157 podman[308910]: 2026-01-22 14:46:48.47390405 +0000 UTC m=+0.118780993 container init 56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:46:48 np0005592157 podman[308910]: 2026-01-22 14:46:48.480992866 +0000 UTC m=+0.125869789 container start 56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:46:48 np0005592157 podman[308910]: 2026-01-22 14:46:48.48476807 +0000 UTC m=+0.129645013 container attach 56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:46:48 np0005592157 dazzling_wu[308926]: 167 167
Jan 22 09:46:48 np0005592157 systemd[1]: libpod-56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35.scope: Deactivated successfully.
Jan 22 09:46:48 np0005592157 podman[308910]: 2026-01-22 14:46:48.487544839 +0000 UTC m=+0.132421802 container died 56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:46:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dc94cca1afcf30ce4c6fd7184e7e04614de98704882fcb285f208e74ae62fb68-merged.mount: Deactivated successfully.
Jan 22 09:46:48 np0005592157 podman[308910]: 2026-01-22 14:46:48.536360272 +0000 UTC m=+0.181237235 container remove 56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:46:48 np0005592157 systemd[1]: libpod-conmon-56d36ca47b755526218039dec055f6d30527506368ab9305ed5fa2dc532adf35.scope: Deactivated successfully.
Jan 22 09:46:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:48 np0005592157 podman[308948]: 2026-01-22 14:46:48.738650679 +0000 UTC m=+0.044795235 container create e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euler, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:46:48 np0005592157 systemd[1]: Started libpod-conmon-e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805.scope.
Jan 22 09:46:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:46:48 np0005592157 podman[308948]: 2026-01-22 14:46:48.719234366 +0000 UTC m=+0.025378892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:46:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a172be68038be9e7bfbee57ff6dfdbc128ef74a3de49ffcab4634bbe26fd2a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a172be68038be9e7bfbee57ff6dfdbc128ef74a3de49ffcab4634bbe26fd2a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a172be68038be9e7bfbee57ff6dfdbc128ef74a3de49ffcab4634bbe26fd2a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a172be68038be9e7bfbee57ff6dfdbc128ef74a3de49ffcab4634bbe26fd2a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:46:48 np0005592157 podman[308948]: 2026-01-22 14:46:48.835901025 +0000 UTC m=+0.142045591 container init e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euler, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:46:48 np0005592157 podman[308948]: 2026-01-22 14:46:48.841203837 +0000 UTC m=+0.147348393 container start e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:46:48 np0005592157 podman[308948]: 2026-01-22 14:46:48.845040182 +0000 UTC m=+0.151184738 container attach e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 09:46:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:49.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:49 np0005592157 awesome_euler[308965]: {
Jan 22 09:46:49 np0005592157 awesome_euler[308965]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:46:49 np0005592157 awesome_euler[308965]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:46:49 np0005592157 awesome_euler[308965]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:46:49 np0005592157 awesome_euler[308965]:        "osd_id": 0,
Jan 22 09:46:49 np0005592157 awesome_euler[308965]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:46:49 np0005592157 awesome_euler[308965]:        "type": "bluestore"
Jan 22 09:46:49 np0005592157 awesome_euler[308965]:    }
Jan 22 09:46:49 np0005592157 awesome_euler[308965]: }
Jan 22 09:46:49 np0005592157 systemd[1]: libpod-e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805.scope: Deactivated successfully.
Jan 22 09:46:49 np0005592157 podman[309037]: 2026-01-22 14:46:49.682600615 +0000 UTC m=+0.028507109 container died e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 09:46:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9a172be68038be9e7bfbee57ff6dfdbc128ef74a3de49ffcab4634bbe26fd2a6-merged.mount: Deactivated successfully.
Jan 22 09:46:49 np0005592157 podman[309037]: 2026-01-22 14:46:49.735074159 +0000 UTC m=+0.080980613 container remove e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:46:49 np0005592157 systemd[1]: libpod-conmon-e6dacf1b3bdfff22d3dbd3c1248f28d50e213689bc78ce661325986a43990805.scope: Deactivated successfully.
Jan 22 09:46:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:46:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:46:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d68a5fe4-2cde-4241-b2a0-7c85de033acc does not exist
Jan 22 09:46:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 072ef476-f044-4bbd-a7a2-b845768e1337 does not exist
Jan 22 09:46:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b91cdb13-84c7-48b4-8ca2-22db0f32031c does not exist
Jan 22 09:46:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:50 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:50 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:50.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:51.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:51 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:52 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:52 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 4203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:53.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:53 np0005592157 podman[309104]: 2026-01-22 14:46:53.399643162 +0000 UTC m=+0.119921211 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 09:46:53 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 4203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:53 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:54 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:55.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:55 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:56 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000074s ======
Jan 22 09:46:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Jan 22 09:46:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:57 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 4208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:46:58 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 4208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:58 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:58.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:46:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:46:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:59.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:46:59 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 09:47:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:00.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:00 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:01 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:47:01.873 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:47:01 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:47:01.874 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:47:02 np0005592157 ceph-mon[74359]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:02 np0005592157 podman[309129]: 2026-01-22 14:47:02.408483491 +0000 UTC m=+0.128111615 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 09:47:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 09:47:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:02.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 4213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:03 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:03 np0005592157 ceph-mon[74359]: Health check update: 10 slow ops, oldest one blocked for 4213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:03.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:03 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:47:03.876 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:47:04 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:47:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:47:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:47:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:04.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:47:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:05.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:05 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 09:47:06 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:07 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 09:47:08 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:08 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:09 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 09:47:10 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:10.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:11 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 09:47:12 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:13 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:13 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 09:47:14 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:47:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:47:16 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:47:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:47:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Jan 22 09:47:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:16.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:17 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:17.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:18 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:18 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:18.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:19 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:20 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:20.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:21 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:22 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:23.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:23 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:23 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:24 np0005592157 podman[309218]: 2026-01-22 14:47:24.352669975 +0000 UTC m=+0.074963994 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:47:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:24 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:24.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:25.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:25 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:27 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:27.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:28 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:28 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:28 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:29 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:29.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:30 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:31 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:32 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:32.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:33.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:33 np0005592157 podman[309291]: 2026-01-22 14:47:33.380148204 +0000 UTC m=+0.111154014 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 09:47:33 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:33 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:34 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:34.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:35 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:36 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:36.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:37.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:37 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:38.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:39.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:39 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:39 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:40 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:41.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:41.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:41 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:42 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:47:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:43.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:47:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:43 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:43 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:44 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:47:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:45.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:47:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:45.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:45 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:47:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:47:46 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:47.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:47 np0005592157 ceph-mgr[74655]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:47:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:47:47
Jan 22 09:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Jan 22 09:47:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:47:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:47:47.623 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:47:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:47:47.624 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:47:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:47:47.624 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:47:47 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:48 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:48 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:49.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:49 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:50 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:47:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:47:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:51.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:51.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f3462c2a-1ede-4bac-afa3-cfead498ee78 does not exist
Jan 22 09:47:51 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5ed4748b-95e6-4a8a-80db-20829ba62368 does not exist
Jan 22 09:47:51 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 00434616-cbb7-40ab-b9ae-26076d39fd78 does not exist
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:47:52 np0005592157 podman[309650]: 2026-01-22 14:47:52.149245333 +0000 UTC m=+0.037513353 container create 10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:47:52 np0005592157 systemd[1]: Started libpod-conmon-10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f.scope.
Jan 22 09:47:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:47:52 np0005592157 podman[309650]: 2026-01-22 14:47:52.133135213 +0000 UTC m=+0.021403253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:47:52 np0005592157 podman[309650]: 2026-01-22 14:47:52.240900971 +0000 UTC m=+0.129169001 container init 10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:47:52 np0005592157 podman[309650]: 2026-01-22 14:47:52.248598412 +0000 UTC m=+0.136866432 container start 10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:47:52 np0005592157 podman[309650]: 2026-01-22 14:47:52.252063678 +0000 UTC m=+0.140331768 container attach 10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:47:52 np0005592157 modest_wright[309666]: 167 167
Jan 22 09:47:52 np0005592157 systemd[1]: libpod-10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f.scope: Deactivated successfully.
Jan 22 09:47:52 np0005592157 conmon[309666]: conmon 10938ecd5587be82a6c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f.scope/container/memory.events
Jan 22 09:47:52 np0005592157 podman[309650]: 2026-01-22 14:47:52.257726929 +0000 UTC m=+0.145994959 container died 10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:47:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-097b01c290450760d3241948f87beb18b44f82e2da8cdc894ab416379372a130-merged.mount: Deactivated successfully.
Jan 22 09:47:52 np0005592157 podman[309650]: 2026-01-22 14:47:52.304367758 +0000 UTC m=+0.192635818 container remove 10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:47:52 np0005592157 systemd[1]: libpod-conmon-10938ecd5587be82a6c43808408ab2f6e831e909172d894b542f010dc34e2f3f.scope: Deactivated successfully.
Jan 22 09:47:52 np0005592157 podman[309691]: 2026-01-22 14:47:52.495771464 +0000 UTC m=+0.068167845 container create 2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:47:52 np0005592157 systemd[1]: Started libpod-conmon-2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753.scope.
Jan 22 09:47:52 np0005592157 podman[309691]: 2026-01-22 14:47:52.461876582 +0000 UTC m=+0.034273023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:47:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:47:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779e4502c06a03fa68b56b4db164331f6669e3cf02d369a5aeb0ab7f29bcb626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779e4502c06a03fa68b56b4db164331f6669e3cf02d369a5aeb0ab7f29bcb626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779e4502c06a03fa68b56b4db164331f6669e3cf02d369a5aeb0ab7f29bcb626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779e4502c06a03fa68b56b4db164331f6669e3cf02d369a5aeb0ab7f29bcb626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779e4502c06a03fa68b56b4db164331f6669e3cf02d369a5aeb0ab7f29bcb626/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:52 np0005592157 podman[309691]: 2026-01-22 14:47:52.599229805 +0000 UTC m=+0.171626246 container init 2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:47:52 np0005592157 podman[309691]: 2026-01-22 14:47:52.611409668 +0000 UTC m=+0.183806049 container start 2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:47:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:52 np0005592157 podman[309691]: 2026-01-22 14:47:52.616155046 +0000 UTC m=+0.188551487 container attach 2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:47:52 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:53.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:53 np0005592157 thirsty_matsumoto[309707]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:47:53 np0005592157 thirsty_matsumoto[309707]: --> relative data size: 1.0
Jan 22 09:47:53 np0005592157 thirsty_matsumoto[309707]: --> All data devices are unavailable
Jan 22 09:47:53 np0005592157 systemd[1]: libpod-2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753.scope: Deactivated successfully.
Jan 22 09:47:53 np0005592157 podman[309691]: 2026-01-22 14:47:53.476791513 +0000 UTC m=+1.049187914 container died 2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 22 09:47:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-779e4502c06a03fa68b56b4db164331f6669e3cf02d369a5aeb0ab7f29bcb626-merged.mount: Deactivated successfully.
Jan 22 09:47:53 np0005592157 podman[309691]: 2026-01-22 14:47:53.564438931 +0000 UTC m=+1.136835322 container remove 2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:47:53 np0005592157 systemd[1]: libpod-conmon-2593dbab1eaeff025c5fd81e7a42880cf728af328323138d7cf6177e3f064753.scope: Deactivated successfully.
Jan 22 09:47:53 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:53 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:54 np0005592157 podman[309879]: 2026-01-22 14:47:54.278644838 +0000 UTC m=+0.047993894 container create d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jennings, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 09:47:54 np0005592157 systemd[1]: Started libpod-conmon-d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895.scope.
Jan 22 09:47:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:47:54 np0005592157 podman[309879]: 2026-01-22 14:47:54.256793645 +0000 UTC m=+0.026142741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:47:54 np0005592157 podman[309879]: 2026-01-22 14:47:54.357372814 +0000 UTC m=+0.126721950 container init d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 09:47:54 np0005592157 podman[309879]: 2026-01-22 14:47:54.364556403 +0000 UTC m=+0.133905449 container start d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jennings, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:47:54 np0005592157 podman[309879]: 2026-01-22 14:47:54.368210363 +0000 UTC m=+0.137559449 container attach d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jennings, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:47:54 np0005592157 hardcore_jennings[309895]: 167 167
Jan 22 09:47:54 np0005592157 systemd[1]: libpod-d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895.scope: Deactivated successfully.
Jan 22 09:47:54 np0005592157 podman[309879]: 2026-01-22 14:47:54.370462299 +0000 UTC m=+0.139811375 container died d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 09:47:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-acf7de11e8983468422a0ee13c1b518534bfbb21dfa8f2abe159358269d966a1-merged.mount: Deactivated successfully.
Jan 22 09:47:54 np0005592157 podman[309879]: 2026-01-22 14:47:54.416704068 +0000 UTC m=+0.186053164 container remove d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jennings, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:47:54 np0005592157 systemd[1]: libpod-conmon-d7588c0201edab2abc36145f9c641e24b967d834fb612c8878afeafb2dc52895.scope: Deactivated successfully.
Jan 22 09:47:54 np0005592157 podman[309901]: 2026-01-22 14:47:54.506320845 +0000 UTC m=+0.102497368 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:47:54 np0005592157 podman[309938]: 2026-01-22 14:47:54.60267753 +0000 UTC m=+0.043438511 container create 72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 09:47:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:54 np0005592157 systemd[1]: Started libpod-conmon-72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52.scope.
Jan 22 09:47:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:47:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f684b55e0b659c7856fa7a91e975a94004d4032e54256c431290243338ec9798/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:54 np0005592157 podman[309938]: 2026-01-22 14:47:54.585645997 +0000 UTC m=+0.026406998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:47:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f684b55e0b659c7856fa7a91e975a94004d4032e54256c431290243338ec9798/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f684b55e0b659c7856fa7a91e975a94004d4032e54256c431290243338ec9798/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f684b55e0b659c7856fa7a91e975a94004d4032e54256c431290243338ec9798/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:54 np0005592157 podman[309938]: 2026-01-22 14:47:54.98055999 +0000 UTC m=+0.421321021 container init 72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_booth, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:47:54 np0005592157 podman[309938]: 2026-01-22 14:47:54.993328947 +0000 UTC m=+0.434089968 container start 72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_booth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 09:47:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:55 np0005592157 podman[309938]: 2026-01-22 14:47:55.135216033 +0000 UTC m=+0.575977054 container attach 72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:47:55 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:55.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]: {
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:    "0": [
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:        {
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "devices": [
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "/dev/loop3"
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            ],
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "lv_name": "ceph_lv0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "lv_size": "7511998464",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "name": "ceph_lv0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "tags": {
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.cluster_name": "ceph",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.crush_device_class": "",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.encrypted": "0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.osd_id": "0",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.type": "block",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:                "ceph.vdo": "0"
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            },
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "type": "block",
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:            "vg_name": "ceph_vg0"
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:        }
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]:    ]
Jan 22 09:47:55 np0005592157 intelligent_booth[309955]: }
Jan 22 09:47:55 np0005592157 systemd[1]: libpod-72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52.scope: Deactivated successfully.
Jan 22 09:47:55 np0005592157 podman[309938]: 2026-01-22 14:47:55.731519051 +0000 UTC m=+1.172280122 container died 72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 09:47:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f684b55e0b659c7856fa7a91e975a94004d4032e54256c431290243338ec9798-merged.mount: Deactivated successfully.
Jan 22 09:47:55 np0005592157 podman[309938]: 2026-01-22 14:47:55.793686236 +0000 UTC m=+1.234447227 container remove 72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:47:55 np0005592157 systemd[1]: libpod-conmon-72eb14d9ee258ec91917572d2ffd135c3db41b5e700eb09246e3b9090bf58d52.scope: Deactivated successfully.
Jan 22 09:47:56 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:56 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:56 np0005592157 podman[310119]: 2026-01-22 14:47:56.408414992 +0000 UTC m=+0.046239480 container create 910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tharp, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:47:56 np0005592157 systemd[1]: Started libpod-conmon-910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54.scope.
Jan 22 09:47:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:47:56 np0005592157 podman[310119]: 2026-01-22 14:47:56.385423241 +0000 UTC m=+0.023247699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:47:56 np0005592157 podman[310119]: 2026-01-22 14:47:56.485699833 +0000 UTC m=+0.123524371 container init 910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:47:56 np0005592157 podman[310119]: 2026-01-22 14:47:56.496266695 +0000 UTC m=+0.134091183 container start 910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:47:56 np0005592157 elated_tharp[310135]: 167 167
Jan 22 09:47:56 np0005592157 systemd[1]: libpod-910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54.scope: Deactivated successfully.
Jan 22 09:47:56 np0005592157 podman[310119]: 2026-01-22 14:47:56.501921456 +0000 UTC m=+0.139762814 container attach 910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:47:56 np0005592157 podman[310119]: 2026-01-22 14:47:56.502722436 +0000 UTC m=+0.140546884 container died 910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:47:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3af7e8860e9f3b617269ed3040972e567c74dbe9b336de069a3fc41c1ec7df53-merged.mount: Deactivated successfully.
Jan 22 09:47:56 np0005592157 podman[310119]: 2026-01-22 14:47:56.566703636 +0000 UTC m=+0.204528094 container remove 910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tharp, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 22 09:47:56 np0005592157 systemd[1]: libpod-conmon-910f9661432ba7a29bcf0bddbb2605cc16dac80d9701cc23fda915fbd078ef54.scope: Deactivated successfully.
Jan 22 09:47:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:56 np0005592157 podman[310163]: 2026-01-22 14:47:56.858496246 +0000 UTC m=+0.113195283 container create 95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 09:47:56 np0005592157 podman[310163]: 2026-01-22 14:47:56.789512022 +0000 UTC m=+0.044211039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:47:56 np0005592157 systemd[1]: Started libpod-conmon-95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654.scope.
Jan 22 09:47:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:47:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522961f6e0237df15516d35c0c6c59ca10af0d87a83f20049f5c46690e5b7fed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522961f6e0237df15516d35c0c6c59ca10af0d87a83f20049f5c46690e5b7fed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522961f6e0237df15516d35c0c6c59ca10af0d87a83f20049f5c46690e5b7fed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522961f6e0237df15516d35c0c6c59ca10af0d87a83f20049f5c46690e5b7fed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:47:56 np0005592157 podman[310163]: 2026-01-22 14:47:56.945226562 +0000 UTC m=+0.199925599 container init 95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:47:56 np0005592157 podman[310163]: 2026-01-22 14:47:56.950839911 +0000 UTC m=+0.205538908 container start 95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:47:56 np0005592157 podman[310163]: 2026-01-22 14:47:56.957539178 +0000 UTC m=+0.212238215 container attach 95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:47:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:57 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]: {
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]:        "osd_id": 0,
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]:        "type": "bluestore"
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]:    }
Jan 22 09:47:57 np0005592157 busy_bhabha[310179]: }
Jan 22 09:47:57 np0005592157 systemd[1]: libpod-95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654.scope: Deactivated successfully.
Jan 22 09:47:57 np0005592157 podman[310201]: 2026-01-22 14:47:57.886772758 +0000 UTC m=+0.020012218 container died 95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:47:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-522961f6e0237df15516d35c0c6c59ca10af0d87a83f20049f5c46690e5b7fed-merged.mount: Deactivated successfully.
Jan 22 09:47:57 np0005592157 podman[310201]: 2026-01-22 14:47:57.945876377 +0000 UTC m=+0.079115817 container remove 95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:47:57 np0005592157 systemd[1]: libpod-conmon-95cafb36dcd6a602310502688847233a8a700506bf05e6809119058ee0448654.scope: Deactivated successfully.
Jan 22 09:47:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:47:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:47:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0085002d-7293-4248-81c4-f555542653f3 does not exist
Jan 22 09:47:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7977d4f0-4c94-4689-9919-d81426a4cdd5 does not exist
Jan 22 09:47:58 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6a1b0cc4-d6b7-4358-a7a7-434a0446e1bf does not exist
Jan 22 09:47:58 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:58 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:47:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:47:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:47:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:59.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:47:59 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.875240) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280875313, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2201, "num_deletes": 251, "total_data_size": 3175237, "memory_usage": 3246352, "flush_reason": "Manual Compaction"}
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280904194, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3081552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65092, "largest_seqno": 67292, "table_properties": {"data_size": 3072496, "index_size": 5229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23031, "raw_average_key_size": 21, "raw_value_size": 3052524, "raw_average_value_size": 2823, "num_data_blocks": 226, "num_entries": 1081, "num_filter_entries": 1081, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093119, "oldest_key_time": 1769093119, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 29081 microseconds, and 16164 cpu microseconds.
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.904313) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3081552 bytes OK
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.904347) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.906800) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.906817) EVENT_LOG_v1 {"time_micros": 1769093280906812, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.906835) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3165975, prev total WAL file size 3165975, number of live WAL files 2.
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.907802) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3009KB)], [146(10MB)]
Jan 22 09:48:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280907846, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13796533, "oldest_snapshot_seqno": -1}
Jan 22 09:48:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 12049 keys, 12161369 bytes, temperature: kUnknown
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281031524, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 12161369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12093332, "index_size": 36820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 326707, "raw_average_key_size": 27, "raw_value_size": 11886165, "raw_average_value_size": 986, "num_data_blocks": 1377, "num_entries": 12049, "num_filter_entries": 12049, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.031833) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 12161369 bytes
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.033894) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.5 rd, 98.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.2 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 12566, records dropped: 517 output_compression: NoCompression
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.033922) EVENT_LOG_v1 {"time_micros": 1769093281033910, "job": 90, "event": "compaction_finished", "compaction_time_micros": 123768, "compaction_time_cpu_micros": 60550, "output_level": 6, "num_output_files": 1, "total_output_size": 12161369, "num_input_records": 12566, "num_output_records": 12049, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281035753, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281038539, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:00.907678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.038632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.038642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.038645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.038648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:48:01.038651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:48:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:01.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:48:01 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:02 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:03.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:03 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:03.224 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:48:03 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:03.225 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:48:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:48:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:03.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:48:03 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:03 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:04 np0005592157 podman[310269]: 2026-01-22 14:48:04.405850058 +0000 UTC m=+0.132766271 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:48:04 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:48:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:48:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:05.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:05 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:06 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:06.226 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:48:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:07 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:07.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:08 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:08 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:08 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:09 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:48:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:48:10 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:11 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:48:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:48:12 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:13.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:13 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:13 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:14 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:48:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:48:15 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:15.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:16 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:48:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:48:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:17 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:19 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:20 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:20 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:21.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:21.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:21 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:22 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:22 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:23.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:24 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:24 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:25.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:25 np0005592157 podman[310355]: 2026-01-22 14:48:25.320561425 +0000 UTC m=+0.063041798 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 22 09:48:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:25.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:25 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:26 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:27.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:27 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:27 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:28 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:29.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:29 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:30 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:31.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:31.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:31 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:32 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:34 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:34 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:35 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:35 np0005592157 podman[310430]: 2026-01-22 14:48:35.397982387 +0000 UTC m=+0.129640052 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 09:48:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:36 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:37.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:37 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:37.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:38 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:38 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:39.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:39.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:40 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:41.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:41 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:42.252 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:48:42 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:42.252 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:48:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:43 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:44 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:44 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:44 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:45 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:48:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:48:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:46 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:47.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:47.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:48:47
Jan 22 09:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'volumes', 'default.rgw.log', '.rgw.root', 'backups']
Jan 22 09:48:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:48:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:47.625 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:48:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:47.626 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:48:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:47.626 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:48:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:48 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:49.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:49 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:48:49.254 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:48:49 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:49 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:49 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:49.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:50 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:51.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:51.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:51 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:52 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:53.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:53.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:53 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:53 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:54 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:55.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:55 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:56 np0005592157 podman[310519]: 2026-01-22 14:48:56.358632691 +0000 UTC m=+0.076130243 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:48:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:56 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:57.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:57 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:48:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:48:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:59.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:48:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:48:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:48:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:59.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 09:48:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:49:00 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:49:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:00 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:49:00 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:01.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:01 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:49:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:49:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:01.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3f622c91-6dac-4dd2-998c-3efaaeacb4eb does not exist
Jan 22 09:49:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 63b858df-d767-4440-9492-60ab4554f122 does not exist
Jan 22 09:49:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cecdbc8f-28aa-47ef-ba58-9d5e463c3b68 does not exist
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:49:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:02 np0005592157 podman[310813]: 2026-01-22 14:49:02.75138575 +0000 UTC m=+0.024038598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:49:03 np0005592157 podman[310813]: 2026-01-22 14:49:03.019775409 +0000 UTC m=+0.292428227 container create cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:49:03 np0005592157 systemd[1]: Started libpod-conmon-cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8.scope.
Jan 22 09:49:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:03.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:49:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:03 np0005592157 podman[310813]: 2026-01-22 14:49:03.200375327 +0000 UTC m=+0.473028175 container init cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:49:03 np0005592157 podman[310813]: 2026-01-22 14:49:03.213341819 +0000 UTC m=+0.485994647 container start cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:49:03 np0005592157 podman[310813]: 2026-01-22 14:49:03.217923203 +0000 UTC m=+0.490599412 container attach cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:49:03 np0005592157 brave_colden[310830]: 167 167
Jan 22 09:49:03 np0005592157 systemd[1]: libpod-cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8.scope: Deactivated successfully.
Jan 22 09:49:03 np0005592157 conmon[310830]: conmon cb8fcab53fb86abd0878 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8.scope/container/memory.events
Jan 22 09:49:03 np0005592157 podman[310813]: 2026-01-22 14:49:03.22464358 +0000 UTC m=+0.497296398 container died cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:49:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7b07df3899de573dc0348329bf78059f287edade9700dcebfe7fc183914afa03-merged.mount: Deactivated successfully.
Jan 22 09:49:03 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:03 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:03 np0005592157 podman[310813]: 2026-01-22 14:49:03.286379764 +0000 UTC m=+0.559032562 container remove cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:49:03 np0005592157 systemd[1]: libpod-conmon-cb8fcab53fb86abd08788f6ca2dafdb7db606533eca0127cc3755ad6bfa3b4c8.scope: Deactivated successfully.
Jan 22 09:49:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:03.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:03 np0005592157 podman[310853]: 2026-01-22 14:49:03.546141869 +0000 UTC m=+0.067878007 container create 7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:49:03 np0005592157 systemd[1]: Started libpod-conmon-7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa.scope.
Jan 22 09:49:03 np0005592157 podman[310853]: 2026-01-22 14:49:03.511766185 +0000 UTC m=+0.033502363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:49:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:49:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a20f49d752c144c753c1e5593a4b92050aa94e0eaf10e0fdd855644de1d56b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a20f49d752c144c753c1e5593a4b92050aa94e0eaf10e0fdd855644de1d56b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a20f49d752c144c753c1e5593a4b92050aa94e0eaf10e0fdd855644de1d56b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a20f49d752c144c753c1e5593a4b92050aa94e0eaf10e0fdd855644de1d56b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a20f49d752c144c753c1e5593a4b92050aa94e0eaf10e0fdd855644de1d56b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:03 np0005592157 podman[310853]: 2026-01-22 14:49:03.663159147 +0000 UTC m=+0.184895335 container init 7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brattain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:49:03 np0005592157 podman[310853]: 2026-01-22 14:49:03.681120474 +0000 UTC m=+0.202856602 container start 7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 09:49:03 np0005592157 podman[310853]: 2026-01-22 14:49:03.686358314 +0000 UTC m=+0.208094502 container attach 7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brattain, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:49:04 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:04 np0005592157 laughing_brattain[310869]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:49:04 np0005592157 laughing_brattain[310869]: --> relative data size: 1.0
Jan 22 09:49:04 np0005592157 laughing_brattain[310869]: --> All data devices are unavailable
Jan 22 09:49:04 np0005592157 systemd[1]: libpod-7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa.scope: Deactivated successfully.
Jan 22 09:49:04 np0005592157 podman[310853]: 2026-01-22 14:49:04.560100706 +0000 UTC m=+1.081836804 container died 7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Jan 22 09:49:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-75a20f49d752c144c753c1e5593a4b92050aa94e0eaf10e0fdd855644de1d56b-merged.mount: Deactivated successfully.
Jan 22 09:49:04 np0005592157 podman[310853]: 2026-01-22 14:49:04.625465401 +0000 UTC m=+1.147201539 container remove 7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:49:04 np0005592157 systemd[1]: libpod-conmon-7968dd0cfedcb3356cfacebb1a7ca2ce87304f25b0a1b6145ef8d95f5de476aa.scope: Deactivated successfully.
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:49:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:49:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:05.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:05 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:05 np0005592157 podman[311036]: 2026-01-22 14:49:05.451016106 +0000 UTC m=+0.051481501 container create 835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:49:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:05 np0005592157 systemd[1]: Started libpod-conmon-835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870.scope.
Jan 22 09:49:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:49:05 np0005592157 podman[311036]: 2026-01-22 14:49:05.432309911 +0000 UTC m=+0.032775326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:49:05 np0005592157 podman[311036]: 2026-01-22 14:49:05.54050617 +0000 UTC m=+0.140971665 container init 835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:49:05 np0005592157 podman[311036]: 2026-01-22 14:49:05.555126993 +0000 UTC m=+0.155592428 container start 835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:49:05 np0005592157 podman[311036]: 2026-01-22 14:49:05.55945082 +0000 UTC m=+0.159916245 container attach 835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:49:05 np0005592157 nifty_margulis[311053]: 167 167
Jan 22 09:49:05 np0005592157 systemd[1]: libpod-835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870.scope: Deactivated successfully.
Jan 22 09:49:05 np0005592157 conmon[311053]: conmon 835f26bd1a99e3c03715 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870.scope/container/memory.events
Jan 22 09:49:05 np0005592157 podman[311036]: 2026-01-22 14:49:05.566044894 +0000 UTC m=+0.166510329 container died 835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:49:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-00ac4b7a6cc9f33b300f5bf62fe7eedbd9b39b00f19185e7c2a3ed25641183b5-merged.mount: Deactivated successfully.
Jan 22 09:49:05 np0005592157 podman[311036]: 2026-01-22 14:49:05.626168078 +0000 UTC m=+0.226633503 container remove 835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:49:05 np0005592157 systemd[1]: libpod-conmon-835f26bd1a99e3c037153ae4aeafc1f98db2f1c235cdc02a5cf074c734454870.scope: Deactivated successfully.
Jan 22 09:49:05 np0005592157 podman[311050]: 2026-01-22 14:49:05.704575136 +0000 UTC m=+0.207058876 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:49:05 np0005592157 podman[311099]: 2026-01-22 14:49:05.939804701 +0000 UTC m=+0.080623244 container create eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:49:05 np0005592157 podman[311099]: 2026-01-22 14:49:05.909511198 +0000 UTC m=+0.050329781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:49:06 np0005592157 systemd[1]: Started libpod-conmon-eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4.scope.
Jan 22 09:49:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:49:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b9e2583e19521dc29dedd2f3ca006b1031abc15de2db5c75bb528b17911721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b9e2583e19521dc29dedd2f3ca006b1031abc15de2db5c75bb528b17911721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b9e2583e19521dc29dedd2f3ca006b1031abc15de2db5c75bb528b17911721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b9e2583e19521dc29dedd2f3ca006b1031abc15de2db5c75bb528b17911721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:06 np0005592157 podman[311099]: 2026-01-22 14:49:06.104838472 +0000 UTC m=+0.245657045 container init eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:49:06 np0005592157 podman[311099]: 2026-01-22 14:49:06.1111811 +0000 UTC m=+0.251999643 container start eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:49:06 np0005592157 podman[311099]: 2026-01-22 14:49:06.115574159 +0000 UTC m=+0.256392742 container attach eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:49:06 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]: {
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:    "0": [
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:        {
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "devices": [
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "/dev/loop3"
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            ],
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "lv_name": "ceph_lv0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "lv_size": "7511998464",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "name": "ceph_lv0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "tags": {
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.cluster_name": "ceph",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.crush_device_class": "",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.encrypted": "0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.osd_id": "0",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.type": "block",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:                "ceph.vdo": "0"
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            },
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "type": "block",
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:            "vg_name": "ceph_vg0"
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:        }
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]:    ]
Jan 22 09:49:07 np0005592157 exciting_goldwasser[311115]: }
Jan 22 09:49:07 np0005592157 systemd[1]: libpod-eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4.scope: Deactivated successfully.
Jan 22 09:49:07 np0005592157 podman[311099]: 2026-01-22 14:49:07.045271712 +0000 UTC m=+1.186090215 container died eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 09:49:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d4b9e2583e19521dc29dedd2f3ca006b1031abc15de2db5c75bb528b17911721-merged.mount: Deactivated successfully.
Jan 22 09:49:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:07.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:07 np0005592157 podman[311099]: 2026-01-22 14:49:07.119487486 +0000 UTC m=+1.260306029 container remove eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:49:07 np0005592157 systemd[1]: libpod-conmon-eb66a14ca9917e6f84e3a00e530eb7768a5557830bc656f0038c4ab5baf16be4.scope: Deactivated successfully.
Jan 22 09:49:07 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:07 np0005592157 podman[311278]: 2026-01-22 14:49:07.963826548 +0000 UTC m=+0.061817077 container create 27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:49:08 np0005592157 systemd[1]: Started libpod-conmon-27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b.scope.
Jan 22 09:49:08 np0005592157 podman[311278]: 2026-01-22 14:49:07.934299994 +0000 UTC m=+0.032290563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:49:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:49:08 np0005592157 podman[311278]: 2026-01-22 14:49:08.058992323 +0000 UTC m=+0.156982892 container init 27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:49:08 np0005592157 podman[311278]: 2026-01-22 14:49:08.069531555 +0000 UTC m=+0.167522044 container start 27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:49:08 np0005592157 podman[311278]: 2026-01-22 14:49:08.074469848 +0000 UTC m=+0.172460427 container attach 27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:49:08 np0005592157 hungry_dewdney[311294]: 167 167
Jan 22 09:49:08 np0005592157 systemd[1]: libpod-27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b.scope: Deactivated successfully.
Jan 22 09:49:08 np0005592157 podman[311278]: 2026-01-22 14:49:08.077101293 +0000 UTC m=+0.175091792 container died 27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:49:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-335f26007bab8f2f821914f6b6a5f64619ba2a6b097d092f0a352fbfdea71e9f-merged.mount: Deactivated successfully.
Jan 22 09:49:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:08 np0005592157 podman[311278]: 2026-01-22 14:49:08.125242609 +0000 UTC m=+0.223233098 container remove 27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:49:08 np0005592157 systemd[1]: libpod-conmon-27ff3d463b69bda3742ecc8a875fadab1f10bb80ce7dc0f0875b18afff893c8b.scope: Deactivated successfully.
Jan 22 09:49:08 np0005592157 podman[311319]: 2026-01-22 14:49:08.420366823 +0000 UTC m=+0.105253746 container create 974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:49:08 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:08 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:08 np0005592157 podman[311319]: 2026-01-22 14:49:08.356617309 +0000 UTC m=+0.041504262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:49:08 np0005592157 systemd[1]: Started libpod-conmon-974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a.scope.
Jan 22 09:49:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:49:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e9885cfbf8f33a4d786ce290ad3586c7e076998ff9090442ba057bff47e54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e9885cfbf8f33a4d786ce290ad3586c7e076998ff9090442ba057bff47e54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e9885cfbf8f33a4d786ce290ad3586c7e076998ff9090442ba057bff47e54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e9885cfbf8f33a4d786ce290ad3586c7e076998ff9090442ba057bff47e54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:49:08 np0005592157 podman[311319]: 2026-01-22 14:49:08.541604176 +0000 UTC m=+0.226491139 container init 974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 09:49:08 np0005592157 podman[311319]: 2026-01-22 14:49:08.559016239 +0000 UTC m=+0.243903152 container start 974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:49:08 np0005592157 podman[311319]: 2026-01-22 14:49:08.56311645 +0000 UTC m=+0.248003403 container attach 974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:49:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:09 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]: {
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]:        "osd_id": 0,
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]:        "type": "bluestore"
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]:    }
Jan 22 09:49:09 np0005592157 heuristic_bassi[311335]: }
Jan 22 09:49:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:09.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:09 np0005592157 systemd[1]: libpod-974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a.scope: Deactivated successfully.
Jan 22 09:49:09 np0005592157 podman[311319]: 2026-01-22 14:49:09.511841265 +0000 UTC m=+1.196728178 container died 974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:49:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-623e9885cfbf8f33a4d786ce290ad3586c7e076998ff9090442ba057bff47e54-merged.mount: Deactivated successfully.
Jan 22 09:49:09 np0005592157 podman[311319]: 2026-01-22 14:49:09.576476541 +0000 UTC m=+1.261363404 container remove 974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:49:09 np0005592157 systemd[1]: libpod-conmon-974fee2b63bea0cd182a754444228fbdea1216b3978986eb42df002b11fdf28a.scope: Deactivated successfully.
Jan 22 09:49:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:49:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:49:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0d609f70-791e-497d-87c9-5bfd0c001581 does not exist
Jan 22 09:49:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev af1555c8-c94f-49a7-984f-0490fae9c815 does not exist
Jan 22 09:49:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 09aa330d-5e9c-4848-87c6-b330a34d4b8f does not exist
Jan 22 09:49:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:10 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:11.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:11 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:12 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:13.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:13.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:13 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:13 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:14 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:15.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:15 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:49:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:49:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:16 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:16 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:17.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:17 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:18 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:18 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:19.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:19 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:21 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:21.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:22 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:23.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:24 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:24 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:49:24.414 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:49:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:49:24.417 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:49:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:25.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:25 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:25.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:26 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:26 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:49:26.419 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:49:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:49:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:49:27 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:27 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:27 np0005592157 podman[311480]: 2026-01-22 14:49:27.37504323 +0000 UTC m=+0.102910378 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 09:49:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:27.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:28 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:28 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:29.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:29 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:30 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:31.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:31 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:31.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:32 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 61 slow ops, oldest one blocked for 4363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:33 np0005592157 ceph-mon[74359]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:33 np0005592157 ceph-mon[74359]: Health check update: 61 slow ops, oldest one blocked for 4363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:49:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:33.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:49:34 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:35 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:49:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:35.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:49:36 np0005592157 podman[311555]: 2026-01-22 14:49:36.366416382 +0000 UTC m=+0.103159594 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:49:36 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:37.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:37 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:37.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 4368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.163470) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378163511, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1392, "num_deletes": 258, "total_data_size": 1898455, "memory_usage": 1934128, "flush_reason": "Manual Compaction"}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378179114, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1857518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67293, "largest_seqno": 68684, "table_properties": {"data_size": 1851434, "index_size": 3094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15427, "raw_average_key_size": 20, "raw_value_size": 1838085, "raw_average_value_size": 2460, "num_data_blocks": 134, "num_entries": 747, "num_filter_entries": 747, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093281, "oldest_key_time": 1769093281, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 15771 microseconds, and 4832 cpu microseconds.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.179228) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1857518 bytes OK
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.179259) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181421) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181443) EVENT_LOG_v1 {"time_micros": 1769093378181436, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1892185, prev total WAL file size 1900929, number of live WAL files 2.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.182542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323636' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1813KB)], [149(11MB)]
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378182599, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 14018887, "oldest_snapshot_seqno": -1}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 12265 keys, 13864887 bytes, temperature: kUnknown
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378291391, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 13864887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13793825, "index_size": 39269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 332779, "raw_average_key_size": 27, "raw_value_size": 13581221, "raw_average_value_size": 1107, "num_data_blocks": 1477, "num_entries": 12265, "num_filter_entries": 12265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.292128) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 13864887 bytes
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.293841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.6 rd, 127.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 12796, records dropped: 531 output_compression: NoCompression
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.293873) EVENT_LOG_v1 {"time_micros": 1769093378293858, "job": 92, "event": "compaction_finished", "compaction_time_micros": 109044, "compaction_time_cpu_micros": 29947, "output_level": 6, "num_output_files": 1, "total_output_size": 13864887, "num_input_records": 12796, "num_output_records": 12265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378294986, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378299740, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.182419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.299915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.299921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.299946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.299949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.299952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.300371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378300467, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 20032, "flush_reason": "Manual Compaction"}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378302910, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 13847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68685, "largest_seqno": 68939, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 2608 microseconds, and 1108 cpu microseconds.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.302996) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 13847 bytes OK
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.303017) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.304903) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.304979) EVENT_LOG_v1 {"time_micros": 1769093378304967, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.305009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.305478) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303037' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(13KB)], [152(13MB)]
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378305647, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 13878734, "oldest_snapshot_seqno": -1}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 12016 keys, 10006574 bytes, temperature: kUnknown
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378397688, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10006574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9942103, "index_size": 33318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 327797, "raw_average_key_size": 27, "raw_value_size": 9738714, "raw_average_value_size": 810, "num_data_blocks": 1228, "num_entries": 12016, "num_filter_entries": 12016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.398048) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10006574 bytes
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.399776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.6 rd, 108.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(1724.9) write-amplify(722.7) OK, records in: 12520, records dropped: 504 output_compression: NoCompression
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.399795) EVENT_LOG_v1 {"time_micros": 1769093378399785, "job": 94, "event": "compaction_finished", "compaction_time_micros": 92128, "compaction_time_cpu_micros": 35863, "output_level": 6, "num_output_files": 1, "total_output_size": 10006574, "num_input_records": 12520, "num_output_records": 12016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378399882, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378402966, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.305408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.403029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.403038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.403041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.403044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:49:38.403047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:38 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 4368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:39.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:39 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:40 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:41.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:41.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:41 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:42 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:43.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 4373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:43.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:43 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:43 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 4373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:44 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:45.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:45.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:45 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:49:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:49:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:46 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:47.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:49:47
Jan 22 09:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root', 'backups', 'images']
Jan 22 09:49:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:49:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:49:47.626 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:49:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:49:47.626 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:49:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:49:47.626 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:49:47 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 4378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:48 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:48 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 4378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:49.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:49.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:49 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:49 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:50 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:51.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:51.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:51 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:52 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 4383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:53.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:53.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:53 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 4383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:53 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:54 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:55.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:49:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:55.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:49:55 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:56 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:57.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:57.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:58 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:58 np0005592157 podman[311642]: 2026-01-22 14:49:58.366497616 +0000 UTC m=+0.088394097 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:49:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:49:59 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:59 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:49:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:59.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:01 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:01.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:02 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:03 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 4393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:04 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 4393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:04 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:50:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:04.670 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:50:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:04.672 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:50:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:05 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:05.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:05.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:06 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:07.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:07 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:07 np0005592157 podman[311667]: 2026-01-22 14:50:07.436912225 +0000 UTC m=+0.162413917 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 22 09:50:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:50:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:07.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:50:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:08 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:08 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:09.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:09 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:09.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:10 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d38002e8-2dcd-49d9-91bc-4704f87901a7 does not exist
Jan 22 09:50:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fdf17564-31a7-4ecd-aaa7-f07e7700f328 does not exist
Jan 22 09:50:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 04a374f5-707f-4280-af0d-996aec6f6b14 does not exist
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:50:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:50:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:11.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:11 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:11.674 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:50:11 np0005592157 podman[312019]: 2026-01-22 14:50:11.866261833 +0000 UTC m=+0.055525351 container create 851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 09:50:11 np0005592157 systemd[1]: Started libpod-conmon-851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1.scope.
Jan 22 09:50:11 np0005592157 podman[312019]: 2026-01-22 14:50:11.838619786 +0000 UTC m=+0.027883374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:50:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:50:11 np0005592157 podman[312019]: 2026-01-22 14:50:11.966616137 +0000 UTC m=+0.155879685 container init 851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 09:50:11 np0005592157 podman[312019]: 2026-01-22 14:50:11.980356428 +0000 UTC m=+0.169619966 container start 851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:50:11 np0005592157 podman[312019]: 2026-01-22 14:50:11.984985663 +0000 UTC m=+0.174249211 container attach 851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 09:50:11 np0005592157 wizardly_hellman[312035]: 167 167
Jan 22 09:50:11 np0005592157 systemd[1]: libpod-851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1.scope: Deactivated successfully.
Jan 22 09:50:11 np0005592157 podman[312019]: 2026-01-22 14:50:11.988764567 +0000 UTC m=+0.178028115 container died 851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:50:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3c96c0b4b7a52e66e03e06c3ba37dfbba5558a4e9808d06111b6c99173722a2c-merged.mount: Deactivated successfully.
Jan 22 09:50:12 np0005592157 podman[312019]: 2026-01-22 14:50:12.048793209 +0000 UTC m=+0.238056757 container remove 851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:50:12 np0005592157 systemd[1]: libpod-conmon-851087b681c1aa31094aa68d4938a822a220572183479cd4839f031ce61327f1.scope: Deactivated successfully.
Jan 22 09:50:12 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:12 np0005592157 podman[312062]: 2026-01-22 14:50:12.317037785 +0000 UTC m=+0.074520493 container create 1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:50:12 np0005592157 systemd[1]: Started libpod-conmon-1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353.scope.
Jan 22 09:50:12 np0005592157 podman[312062]: 2026-01-22 14:50:12.288507086 +0000 UTC m=+0.045989874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:50:12 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:50:12 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee88e260c812789f188bda51b6aa3b7a79a8fe740af78d1ce39bf6747d4a818/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:12 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee88e260c812789f188bda51b6aa3b7a79a8fe740af78d1ce39bf6747d4a818/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:12 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee88e260c812789f188bda51b6aa3b7a79a8fe740af78d1ce39bf6747d4a818/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:12 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee88e260c812789f188bda51b6aa3b7a79a8fe740af78d1ce39bf6747d4a818/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:12 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee88e260c812789f188bda51b6aa3b7a79a8fe740af78d1ce39bf6747d4a818/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:12 np0005592157 podman[312062]: 2026-01-22 14:50:12.431347725 +0000 UTC m=+0.188830503 container init 1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 09:50:12 np0005592157 podman[312062]: 2026-01-22 14:50:12.445116157 +0000 UTC m=+0.202598885 container start 1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:50:12 np0005592157 podman[312062]: 2026-01-22 14:50:12.450348507 +0000 UTC m=+0.207831295 container attach 1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:50:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:13 np0005592157 nifty_nightingale[312078]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:50:13 np0005592157 nifty_nightingale[312078]: --> relative data size: 1.0
Jan 22 09:50:13 np0005592157 nifty_nightingale[312078]: --> All data devices are unavailable
Jan 22 09:50:13 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:13 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:13 np0005592157 systemd[1]: libpod-1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353.scope: Deactivated successfully.
Jan 22 09:50:13 np0005592157 podman[312062]: 2026-01-22 14:50:13.327591387 +0000 UTC m=+1.085074125 container died 1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:50:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4ee88e260c812789f188bda51b6aa3b7a79a8fe740af78d1ce39bf6747d4a818-merged.mount: Deactivated successfully.
Jan 22 09:50:13 np0005592157 podman[312062]: 2026-01-22 14:50:13.409719018 +0000 UTC m=+1.167201746 container remove 1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:50:13 np0005592157 systemd[1]: libpod-conmon-1b10ee9fb5c65b3807540ba9fbd564dfefd26e39a95eccf819367aef692b4353.scope: Deactivated successfully.
Jan 22 09:50:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:13.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:14 np0005592157 podman[312247]: 2026-01-22 14:50:14.327087633 +0000 UTC m=+0.073597340 container create a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 09:50:14 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:14 np0005592157 systemd[1]: Started libpod-conmon-a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008.scope.
Jan 22 09:50:14 np0005592157 podman[312247]: 2026-01-22 14:50:14.294242757 +0000 UTC m=+0.040752504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:50:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:50:14 np0005592157 podman[312247]: 2026-01-22 14:50:14.420798292 +0000 UTC m=+0.167307989 container init a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:50:14 np0005592157 podman[312247]: 2026-01-22 14:50:14.428715799 +0000 UTC m=+0.175225506 container start a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:50:14 np0005592157 sharp_chaum[312263]: 167 167
Jan 22 09:50:14 np0005592157 podman[312247]: 2026-01-22 14:50:14.433242471 +0000 UTC m=+0.179752198 container attach a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 09:50:14 np0005592157 systemd[1]: libpod-a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008.scope: Deactivated successfully.
Jan 22 09:50:14 np0005592157 conmon[312263]: conmon a466056f8882eb4b7742 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008.scope/container/memory.events
Jan 22 09:50:14 np0005592157 podman[312247]: 2026-01-22 14:50:14.435028356 +0000 UTC m=+0.181538063 container died a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:50:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-502883206f6976f241aa81b92be8b4c3d9655e845a83a3378bf154b88bd0fd6d-merged.mount: Deactivated successfully.
Jan 22 09:50:14 np0005592157 podman[312247]: 2026-01-22 14:50:14.490850593 +0000 UTC m=+0.237360300 container remove a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 09:50:14 np0005592157 systemd[1]: libpod-conmon-a466056f8882eb4b7742aa26a8f7ce28e4288c5ed55a2a0210e929b46c9b7008.scope: Deactivated successfully.
Jan 22 09:50:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:14 np0005592157 podman[312287]: 2026-01-22 14:50:14.729544124 +0000 UTC m=+0.063077958 container create 6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:50:14 np0005592157 systemd[1]: Started libpod-conmon-6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d.scope.
Jan 22 09:50:14 np0005592157 podman[312287]: 2026-01-22 14:50:14.699523788 +0000 UTC m=+0.033057682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:50:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:50:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599a8e98322d6888eaf396f06a7e4b81820ccdf7e568bf75cb86dccf673608a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599a8e98322d6888eaf396f06a7e4b81820ccdf7e568bf75cb86dccf673608a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599a8e98322d6888eaf396f06a7e4b81820ccdf7e568bf75cb86dccf673608a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599a8e98322d6888eaf396f06a7e4b81820ccdf7e568bf75cb86dccf673608a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:14 np0005592157 podman[312287]: 2026-01-22 14:50:14.843405564 +0000 UTC m=+0.176939418 container init 6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:50:14 np0005592157 podman[312287]: 2026-01-22 14:50:14.857492604 +0000 UTC m=+0.191026408 container start 6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:50:14 np0005592157 podman[312287]: 2026-01-22 14:50:14.86098019 +0000 UTC m=+0.194514004 container attach 6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:50:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:15 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:15.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]: {
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:    "0": [
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:        {
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "devices": [
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "/dev/loop3"
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            ],
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "lv_name": "ceph_lv0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "lv_size": "7511998464",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "name": "ceph_lv0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "tags": {
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.cluster_name": "ceph",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.crush_device_class": "",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.encrypted": "0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.osd_id": "0",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.type": "block",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:                "ceph.vdo": "0"
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            },
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "type": "block",
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:            "vg_name": "ceph_vg0"
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:        }
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]:    ]
Jan 22 09:50:15 np0005592157 inspiring_bartik[312304]: }
Jan 22 09:50:15 np0005592157 systemd[1]: libpod-6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d.scope: Deactivated successfully.
Jan 22 09:50:15 np0005592157 podman[312287]: 2026-01-22 14:50:15.660306684 +0000 UTC m=+0.993840558 container died 6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 09:50:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-599a8e98322d6888eaf396f06a7e4b81820ccdf7e568bf75cb86dccf673608a8-merged.mount: Deactivated successfully.
Jan 22 09:50:15 np0005592157 podman[312287]: 2026-01-22 14:50:15.744455625 +0000 UTC m=+1.077989469 container remove 6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:50:15 np0005592157 systemd[1]: libpod-conmon-6e329e86ff10a4e064b272c14e328316728fbaa30a54690d845c22777696735d.scope: Deactivated successfully.
Jan 22 09:50:16 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:16 np0005592157 podman[312466]: 2026-01-22 14:50:16.490209867 +0000 UTC m=+0.066943845 container create c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:50:16 np0005592157 systemd[1]: Started libpod-conmon-c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376.scope.
Jan 22 09:50:16 np0005592157 podman[312466]: 2026-01-22 14:50:16.462683353 +0000 UTC m=+0.039417381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:50:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:50:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:50:16 np0005592157 podman[312466]: 2026-01-22 14:50:16.582255194 +0000 UTC m=+0.158989232 container init c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 09:50:16 np0005592157 podman[312466]: 2026-01-22 14:50:16.592661133 +0000 UTC m=+0.169395111 container start c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noyce, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:50:16 np0005592157 podman[312466]: 2026-01-22 14:50:16.597059062 +0000 UTC m=+0.173793090 container attach c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noyce, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 09:50:16 np0005592157 cool_noyce[312482]: 167 167
Jan 22 09:50:16 np0005592157 systemd[1]: libpod-c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376.scope: Deactivated successfully.
Jan 22 09:50:16 np0005592157 podman[312466]: 2026-01-22 14:50:16.600364304 +0000 UTC m=+0.177098282 container died c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 09:50:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b9c0e0914ba5d43d5518e760065bc9ce6888f5933ecab9a6d2651ca57cf65cb4-merged.mount: Deactivated successfully.
Jan 22 09:50:16 np0005592157 podman[312466]: 2026-01-22 14:50:16.653960206 +0000 UTC m=+0.230694184 container remove c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_noyce, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:50:16 np0005592157 systemd[1]: libpod-conmon-c2cf3d226d50719aff33fc7a3818d22b118b292363e0dc8001eca03663831376.scope: Deactivated successfully.
Jan 22 09:50:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:16 np0005592157 podman[312506]: 2026-01-22 14:50:16.906285316 +0000 UTC m=+0.074800280 container create e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:50:16 np0005592157 systemd[1]: Started libpod-conmon-e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef.scope.
Jan 22 09:50:16 np0005592157 podman[312506]: 2026-01-22 14:50:16.878870015 +0000 UTC m=+0.047385029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:50:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:50:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637757a2fcb4aa4ffd8b3ad702ad66268f1569532e9b19c84fa3f2732963c328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637757a2fcb4aa4ffd8b3ad702ad66268f1569532e9b19c84fa3f2732963c328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637757a2fcb4aa4ffd8b3ad702ad66268f1569532e9b19c84fa3f2732963c328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637757a2fcb4aa4ffd8b3ad702ad66268f1569532e9b19c84fa3f2732963c328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:50:17 np0005592157 podman[312506]: 2026-01-22 14:50:17.024232767 +0000 UTC m=+0.192747731 container init e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feynman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 22 09:50:17 np0005592157 podman[312506]: 2026-01-22 14:50:17.035871946 +0000 UTC m=+0.204386920 container start e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:50:17 np0005592157 podman[312506]: 2026-01-22 14:50:17.04122857 +0000 UTC m=+0.209743524 container attach e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:50:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:17.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:17 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:17.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]: {
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]:        "osd_id": 0,
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]:        "type": "bluestore"
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]:    }
Jan 22 09:50:17 np0005592157 hungry_feynman[312522]: }
Jan 22 09:50:17 np0005592157 systemd[1]: libpod-e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef.scope: Deactivated successfully.
Jan 22 09:50:17 np0005592157 podman[312506]: 2026-01-22 14:50:17.982582131 +0000 UTC m=+1.151097075 container died e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:50:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-637757a2fcb4aa4ffd8b3ad702ad66268f1569532e9b19c84fa3f2732963c328-merged.mount: Deactivated successfully.
Jan 22 09:50:18 np0005592157 podman[312506]: 2026-01-22 14:50:18.048768886 +0000 UTC m=+1.217283860 container remove e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:50:18 np0005592157 systemd[1]: libpod-conmon-e0404806ebc142c098c245f550923d5a1057b1ada6fc3eb21b09fc6e32350fef.scope: Deactivated successfully.
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e51157d1-036a-4b45-ad68-dc8ea9c4fd4f does not exist
Jan 22 09:50:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9632f563-7e1d-4ced-abf6-69bc82e4b7f1 does not exist
Jan 22 09:50:18 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c190228b-c890-441b-bcc9-983c78cf3168 does not exist
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369554208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:50:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369554208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:50:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:19 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:19.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:20 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:21.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:21 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:22 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:23 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:23 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:24 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:25.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:25 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:25 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:26 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:27.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:27.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:27 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:28 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:28 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:29.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:29 np0005592157 podman[312613]: 2026-01-22 14:50:29.363489955 +0000 UTC m=+0.095357621 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:50:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:50:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:29.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:50:29 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:30 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:31.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:31.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:31 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:33 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:33.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:33.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:34 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:34 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:35 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:35.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:36 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:50:37 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:37.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:38 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:38 np0005592157 podman[312691]: 2026-01-22 14:50:38.397323104 +0000 UTC m=+0.124774661 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:50:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 22 09:50:39 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:39 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:39.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:39.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:40 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 710 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 22 09:50:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:41.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:41 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:50:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:41.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:50:42 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 710 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 22 09:50:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:43.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:43 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:43 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:43.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:44 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 39 op/s
Jan 22 09:50:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:45.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.328574) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445328832, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1057, "num_deletes": 251, "total_data_size": 1279273, "memory_usage": 1302592, "flush_reason": "Manual Compaction"}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445341210, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1258029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68940, "largest_seqno": 69996, "table_properties": {"data_size": 1253303, "index_size": 2121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12396, "raw_average_key_size": 20, "raw_value_size": 1242935, "raw_average_value_size": 2068, "num_data_blocks": 92, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 12711 microseconds, and 6514 cpu microseconds.
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341285) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1258029 bytes OK
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341311) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.343645) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.343671) EVENT_LOG_v1 {"time_micros": 1769093445343664, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.343692) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1274287, prev total WAL file size 1274287, number of live WAL files 2.
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.344640) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1228KB)], [155(9772KB)]
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445344722, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11264603, "oldest_snapshot_seqno": -1}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 12102 keys, 9648509 bytes, temperature: kUnknown
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445444557, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9648509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9583948, "index_size": 33216, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 330731, "raw_average_key_size": 27, "raw_value_size": 9379447, "raw_average_value_size": 775, "num_data_blocks": 1219, "num_entries": 12102, "num_filter_entries": 12102, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.444881) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9648509 bytes
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.446406) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.8 rd, 96.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(16.6) write-amplify(7.7) OK, records in: 12617, records dropped: 515 output_compression: NoCompression
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.446423) EVENT_LOG_v1 {"time_micros": 1769093445446415, "job": 96, "event": "compaction_finished", "compaction_time_micros": 99906, "compaction_time_cpu_micros": 51956, "output_level": 6, "num_output_files": 1, "total_output_size": 9648509, "num_input_records": 12617, "num_output_records": 12102, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445446714, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445448478, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.344513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.448782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.448793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.448798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.448802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:50:45.448806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:45.449 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:50:45 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:45.451 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:50:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:45.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:46 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:50:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:50:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 22 09:50:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:47.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:47 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:50:47
Jan 22 09:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'volumes', 'backups', 'images', '.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 22 09:50:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:50:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:47.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:47.626 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:50:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:47.626 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:50:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:47.627 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:50:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:48 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:48 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 22 09:50:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:49.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:49 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:50 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 22 09:50:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:51.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:51 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:50:51.453 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:50:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:51.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:51 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:52 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 282 KiB/s wr, 17 op/s
Jan 22 09:50:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:53.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:53 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:53 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:54 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 282 KiB/s wr, 17 op/s
Jan 22 09:50:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:55 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 43 KiB/s wr, 3 op/s
Jan 22 09:50:56 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:57.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:57 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 62 slow ops, oldest one blocked for 4448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:50:58 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:50:58 np0005592157 ceph-mon[74359]: Health check update: 62 slow ops, oldest one blocked for 4448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:59.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:50:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:59 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:00 np0005592157 podman[312778]: 2026-01-22 14:51:00.335220872 +0000 UTC m=+0.074307088 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:51:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:00 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:01.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:51:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:01.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:51:01 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:02 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 4453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:03.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:03.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:03 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:03 np0005592157 ceph-mon[74359]: Health check update: 32 slow ops, oldest one blocked for 4453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009901940256839754 of space, bias 1.0, pg target 0.2930974316024567 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:51:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:04 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:51:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:05.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:51:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:05.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:05 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:05 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:06 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:07.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:07.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:07 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 4458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:08 np0005592157 ceph-mon[74359]: Health check update: 32 slow ops, oldest one blocked for 4458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:08 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:09.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:09 np0005592157 podman[312801]: 2026-01-22 14:51:09.388327299 +0000 UTC m=+0.115943502 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:51:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:09.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:09 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:10 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:51:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:11.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:51:12 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:13 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 4462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:13.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:14 np0005592157 ceph-mon[74359]: Health check update: 32 slow ops, oldest one blocked for 4462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:14 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:15 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:51:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:15.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:51:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:15.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:16 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:51:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:51:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:17 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:17.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 4467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:18 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:18 np0005592157 ceph-mon[74359]: Health check update: 32 slow ops, oldest one blocked for 4467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:19.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8dfcf2ca-86c4-4da6-8210-5d66b1892569 does not exist
Jan 22 09:51:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 69bb4a59-274e-4159-a349-6e8d307077bf does not exist
Jan 22 09:51:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev eddf8f3a-a716-4885-a852-5e36f60aa3e6 does not exist
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:51:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:51:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:19.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:20 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:51:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:51:20 np0005592157 podman[313157]: 2026-01-22 14:51:20.330081568 +0000 UTC m=+0.063831947 container create 295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:51:20 np0005592157 systemd[1]: Started libpod-conmon-295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9.scope.
Jan 22 09:51:20 np0005592157 podman[313157]: 2026-01-22 14:51:20.303281782 +0000 UTC m=+0.037032201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:51:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:51:20 np0005592157 podman[313157]: 2026-01-22 14:51:20.468429876 +0000 UTC m=+0.202180305 container init 295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:51:20 np0005592157 podman[313157]: 2026-01-22 14:51:20.477467041 +0000 UTC m=+0.211217370 container start 295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:51:20 np0005592157 podman[313157]: 2026-01-22 14:51:20.480693201 +0000 UTC m=+0.214443540 container attach 295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 09:51:20 np0005592157 cool_burnell[313173]: 167 167
Jan 22 09:51:20 np0005592157 systemd[1]: libpod-295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9.scope: Deactivated successfully.
Jan 22 09:51:20 np0005592157 podman[313157]: 2026-01-22 14:51:20.48468267 +0000 UTC m=+0.218433009 container died 295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:51:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9dc30cbb84d0b2ab0658b0860d4dbd74a89ddd1c12edb981ec531f90ab77a8f6-merged.mount: Deactivated successfully.
Jan 22 09:51:20 np0005592157 podman[313157]: 2026-01-22 14:51:20.527037953 +0000 UTC m=+0.260788302 container remove 295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:51:20 np0005592157 systemd[1]: libpod-conmon-295078d92f07e6517211e0170f142af8a5df2df61187e53cd1d60d17d94fdbe9.scope: Deactivated successfully.
Jan 22 09:51:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:20 np0005592157 podman[313198]: 2026-01-22 14:51:20.739615335 +0000 UTC m=+0.062896194 container create bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:51:20 np0005592157 systemd[1]: Started libpod-conmon-bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268.scope.
Jan 22 09:51:20 np0005592157 podman[313198]: 2026-01-22 14:51:20.714704186 +0000 UTC m=+0.037985115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:51:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:51:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db9024539aa2825fbd51c15464fb1bbce1c1824eff273787948eb52db31654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db9024539aa2825fbd51c15464fb1bbce1c1824eff273787948eb52db31654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db9024539aa2825fbd51c15464fb1bbce1c1824eff273787948eb52db31654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db9024539aa2825fbd51c15464fb1bbce1c1824eff273787948eb52db31654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db9024539aa2825fbd51c15464fb1bbce1c1824eff273787948eb52db31654/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:20 np0005592157 podman[313198]: 2026-01-22 14:51:20.853509376 +0000 UTC m=+0.176790325 container init bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:51:20 np0005592157 podman[313198]: 2026-01-22 14:51:20.868304313 +0000 UTC m=+0.191585202 container start bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:51:20 np0005592157 podman[313198]: 2026-01-22 14:51:20.873253876 +0000 UTC m=+0.196534765 container attach bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:51:21 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:21.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:21 np0005592157 romantic_brattain[313215]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:51:21 np0005592157 romantic_brattain[313215]: --> relative data size: 1.0
Jan 22 09:51:21 np0005592157 romantic_brattain[313215]: --> All data devices are unavailable
Jan 22 09:51:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:21.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:21 np0005592157 systemd[1]: libpod-bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268.scope: Deactivated successfully.
Jan 22 09:51:21 np0005592157 podman[313198]: 2026-01-22 14:51:21.702916632 +0000 UTC m=+1.026197491 container died bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_brattain, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:51:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-37db9024539aa2825fbd51c15464fb1bbce1c1824eff273787948eb52db31654-merged.mount: Deactivated successfully.
Jan 22 09:51:21 np0005592157 podman[313198]: 2026-01-22 14:51:21.776742927 +0000 UTC m=+1.100023786 container remove bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_brattain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 09:51:21 np0005592157 systemd[1]: libpod-conmon-bf375a41d1e1ce03ab83b7775cb6b8d09710c4dcaabbac493d33851d8b6c0268.scope: Deactivated successfully.
Jan 22 09:51:22 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:22 np0005592157 podman[313386]: 2026-01-22 14:51:22.416550666 +0000 UTC m=+0.037094173 container create 006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamarr, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:51:22 np0005592157 systemd[1]: Started libpod-conmon-006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919.scope.
Jan 22 09:51:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:51:22 np0005592157 podman[313386]: 2026-01-22 14:51:22.39901358 +0000 UTC m=+0.019557097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:51:22 np0005592157 podman[313386]: 2026-01-22 14:51:22.50364849 +0000 UTC m=+0.124192017 container init 006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:51:22 np0005592157 podman[313386]: 2026-01-22 14:51:22.509603298 +0000 UTC m=+0.130146795 container start 006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamarr, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:51:22 np0005592157 podman[313386]: 2026-01-22 14:51:22.513129536 +0000 UTC m=+0.133673063 container attach 006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamarr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:51:22 np0005592157 infallible_lamarr[313402]: 167 167
Jan 22 09:51:22 np0005592157 systemd[1]: libpod-006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919.scope: Deactivated successfully.
Jan 22 09:51:22 np0005592157 conmon[313402]: conmon 006804e5c87c9bebbb13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919.scope/container/memory.events
Jan 22 09:51:22 np0005592157 podman[313386]: 2026-01-22 14:51:22.516506 +0000 UTC m=+0.137049497 container died 006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamarr, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:51:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8383f3d69bd647475bc6a7d20a0bc4acea67efa7dbf40bd187dbf0e95f00d024-merged.mount: Deactivated successfully.
Jan 22 09:51:22 np0005592157 podman[313386]: 2026-01-22 14:51:22.553813157 +0000 UTC m=+0.174356684 container remove 006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:51:22 np0005592157 systemd[1]: libpod-conmon-006804e5c87c9bebbb1360ded5fcef8aa66f8cf44725da2fd183b735bb876919.scope: Deactivated successfully.
Jan 22 09:51:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:22 np0005592157 podman[313426]: 2026-01-22 14:51:22.720962021 +0000 UTC m=+0.040999250 container create 71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:51:22 np0005592157 systemd[1]: Started libpod-conmon-71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701.scope.
Jan 22 09:51:22 np0005592157 podman[313426]: 2026-01-22 14:51:22.702109312 +0000 UTC m=+0.022146561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:51:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:51:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6a34b4cc7c681826ccb4115e384187ab0dff7ef2d0c4548d320ce9b12e8b9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6a34b4cc7c681826ccb4115e384187ab0dff7ef2d0c4548d320ce9b12e8b9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6a34b4cc7c681826ccb4115e384187ab0dff7ef2d0c4548d320ce9b12e8b9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6a34b4cc7c681826ccb4115e384187ab0dff7ef2d0c4548d320ce9b12e8b9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:22 np0005592157 podman[313426]: 2026-01-22 14:51:22.837964478 +0000 UTC m=+0.158001737 container init 71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:51:22 np0005592157 podman[313426]: 2026-01-22 14:51:22.849254699 +0000 UTC m=+0.169291918 container start 71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:51:22 np0005592157 podman[313426]: 2026-01-22 14:51:22.852905949 +0000 UTC m=+0.172943538 container attach 71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:51:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 4472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:23.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:23 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:23 np0005592157 ceph-mon[74359]: Health check update: 32 slow ops, oldest one blocked for 4472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]: {
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:    "0": [
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:        {
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "devices": [
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "/dev/loop3"
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            ],
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "lv_name": "ceph_lv0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "lv_size": "7511998464",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "name": "ceph_lv0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "tags": {
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.cluster_name": "ceph",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.crush_device_class": "",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.encrypted": "0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.osd_id": "0",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.type": "block",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:                "ceph.vdo": "0"
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            },
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "type": "block",
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:            "vg_name": "ceph_vg0"
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:        }
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]:    ]
Jan 22 09:51:23 np0005592157 trusting_jemison[313442]: }
Jan 22 09:51:23 np0005592157 systemd[1]: libpod-71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701.scope: Deactivated successfully.
Jan 22 09:51:23 np0005592157 podman[313426]: 2026-01-22 14:51:23.638304686 +0000 UTC m=+0.958341955 container died 71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:51:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fe6a34b4cc7c681826ccb4115e384187ab0dff7ef2d0c4548d320ce9b12e8b9c-merged.mount: Deactivated successfully.
Jan 22 09:51:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:23.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:23 np0005592157 podman[313426]: 2026-01-22 14:51:23.696623036 +0000 UTC m=+1.016660275 container remove 71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:51:23 np0005592157 systemd[1]: libpod-conmon-71f8c705e05e918c5864e73eb2c2119ce554ca00f23364054157846bc24bc701.scope: Deactivated successfully.
Jan 22 09:51:24 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:24 np0005592157 podman[313609]: 2026-01-22 14:51:24.351746726 +0000 UTC m=+0.032086739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:51:24 np0005592157 podman[313609]: 2026-01-22 14:51:24.557464307 +0000 UTC m=+0.237804260 container create 7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:51:24 np0005592157 systemd[1]: Started libpod-conmon-7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f.scope.
Jan 22 09:51:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:51:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:24 np0005592157 podman[313609]: 2026-01-22 14:51:24.823775046 +0000 UTC m=+0.504114989 container init 7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:51:24 np0005592157 podman[313609]: 2026-01-22 14:51:24.834240786 +0000 UTC m=+0.514580699 container start 7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:51:24 np0005592157 podman[313609]: 2026-01-22 14:51:24.838328007 +0000 UTC m=+0.518667940 container attach 7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:51:24 np0005592157 adoring_turing[313626]: 167 167
Jan 22 09:51:24 np0005592157 systemd[1]: libpod-7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f.scope: Deactivated successfully.
Jan 22 09:51:24 np0005592157 conmon[313626]: conmon 7e0c0c8ef5b5a7856456 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f.scope/container/memory.events
Jan 22 09:51:24 np0005592157 podman[313609]: 2026-01-22 14:51:24.843000403 +0000 UTC m=+0.523340316 container died 7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:51:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8d84713199ab573205f433198dc3808b30d83927c20a3fb0f96b77d0e48feb5b-merged.mount: Deactivated successfully.
Jan 22 09:51:25 np0005592157 podman[313609]: 2026-01-22 14:51:25.052463428 +0000 UTC m=+0.732803351 container remove 7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:51:25 np0005592157 systemd[1]: libpod-conmon-7e0c0c8ef5b5a78564564adc605c86b02e78585ec44f626dad819b18376dc11f.scope: Deactivated successfully.
Jan 22 09:51:25 np0005592157 podman[313650]: 2026-01-22 14:51:25.266919248 +0000 UTC m=+0.098875789 container create 6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:51:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:25.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:25 np0005592157 podman[313650]: 2026-01-22 14:51:25.197202615 +0000 UTC m=+0.029159156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:51:25 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:25 np0005592157 systemd[1]: Started libpod-conmon-6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7.scope.
Jan 22 09:51:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:51:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c66f941624d81787ffc97852ae5275e33cca836b3915cf32a92db4f4d0b086d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c66f941624d81787ffc97852ae5275e33cca836b3915cf32a92db4f4d0b086d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c66f941624d81787ffc97852ae5275e33cca836b3915cf32a92db4f4d0b086d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c66f941624d81787ffc97852ae5275e33cca836b3915cf32a92db4f4d0b086d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:51:25 np0005592157 podman[313650]: 2026-01-22 14:51:25.430900181 +0000 UTC m=+0.262856722 container init 6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 09:51:25 np0005592157 podman[313650]: 2026-01-22 14:51:25.439102485 +0000 UTC m=+0.271059006 container start 6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:51:25 np0005592157 podman[313650]: 2026-01-22 14:51:25.504552522 +0000 UTC m=+0.336509083 container attach 6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:51:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:25.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]: {
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]:        "osd_id": 0,
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]:        "type": "bluestore"
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]:    }
Jan 22 09:51:26 np0005592157 nifty_antonelli[313667]: }
Jan 22 09:51:26 np0005592157 systemd[1]: libpod-6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7.scope: Deactivated successfully.
Jan 22 09:51:26 np0005592157 podman[313650]: 2026-01-22 14:51:26.317622356 +0000 UTC m=+1.149578877 container died 6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 09:51:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4c66f941624d81787ffc97852ae5275e33cca836b3915cf32a92db4f4d0b086d-merged.mount: Deactivated successfully.
Jan 22 09:51:26 np0005592157 podman[313650]: 2026-01-22 14:51:26.370132631 +0000 UTC m=+1.202089142 container remove 6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:51:26 np0005592157 systemd[1]: libpod-conmon-6023f244f3d5bd9aeefd6a98d44e81996537a29516635c4a1189f970ecfb71d7.scope: Deactivated successfully.
Jan 22 09:51:26 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:51:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:51:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0ab83257-d5ff-4da2-9a22-f6e556c02add does not exist
Jan 22 09:51:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cc72e2bb-d572-4b3b-9af6-93788de2f81c does not exist
Jan 22 09:51:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 68948447-c3a5-4165-b589-037600e86ba9 does not exist
Jan 22 09:51:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:27.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:27 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:51:27.283 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:51:27 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:51:27.285 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:51:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:27.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:27 np0005592157 ceph-mon[74359]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:27 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 4477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:28 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:28 np0005592157 ceph-mon[74359]: Health check update: 32 slow ops, oldest one blocked for 4477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:29.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:29.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:29 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:30 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:30 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:31 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:51:31.288 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:51:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:31.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:31 np0005592157 podman[313755]: 2026-01-22 14:51:31.351505817 +0000 UTC m=+0.076338298 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:51:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:31.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:31 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:32 np0005592157 ceph-mon[74359]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:51:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 4482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:33.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:33.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:33 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 4482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:33 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 2 op/s
Jan 22 09:51:34 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:35.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:35.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:35 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 22 09:51:36 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:37.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:37.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:38 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 11 slow ops, oldest one blocked for 4487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 714 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 42 op/s
Jan 22 09:51:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 22 09:51:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 22 09:51:39 np0005592157 ceph-mon[74359]: Health check update: 11 slow ops, oldest one blocked for 4487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:39 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 22 09:51:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:39.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:39.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:40 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:40 np0005592157 podman[313831]: 2026-01-22 14:51:40.126421789 +0000 UTC m=+0.112921647 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:51:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Jan 22 09:51:41 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:41.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:41.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:42 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Jan 22 09:51:43 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 11 slow ops, oldest one blocked for 4493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:43.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:44 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:44 np0005592157 ceph-mon[74359]: Health check update: 11 slow ops, oldest one blocked for 4493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.6 KiB/s wr, 65 op/s
Jan 22 09:51:45 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:45.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:46 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:51:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:51:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Jan 22 09:51:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:47.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:51:47
Jan 22 09:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'vms', 'default.rgw.control', 'default.rgw.meta', 'backups', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Jan 22 09:51:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:51:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:51:47.627 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:51:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:51:47.627 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:51:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:51:47.627 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:51:47 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:47.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 11 slow ops, oldest one blocked for 4498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:48 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:48 np0005592157 ceph-mon[74359]: Health check update: 11 slow ops, oldest one blocked for 4498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 409 B/s wr, 18 op/s
Jan 22 09:51:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:49.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:49 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:49.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 351 B/s wr, 15 op/s
Jan 22 09:51:50 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:51.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:51.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:52 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 09:51:53 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:53 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 11 slow ops, oldest one blocked for 4503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:53.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:53.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:54 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:54 np0005592157 ceph-mon[74359]: Health check update: 11 slow ops, oldest one blocked for 4503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 09:51:55 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:55.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:56 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:51:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 15K writes, 70K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1948 writes, 8878 keys, 1947 commit groups, 1.0 writes per commit group, ingest: 11.00 MB, 0.02 MB/s#012Interval WAL: 1948 writes, 1947 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.6      1.13              0.36        48    0.024       0      0       0.0       0.0#012  L6      1/0    9.20 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.4    116.4     99.8      4.41              1.69        47    0.094    415K    25K       0.0       0.0#012 Sum      1/0    9.20 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.4     92.6     94.3      5.54              2.04        95    0.058    415K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     97.3     97.3      0.87              0.35        14    0.062     86K   3626       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    116.4     99.8      4.41              1.69        47    0.094    415K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.9      1.13              0.36        47    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.080, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.51 GB write, 0.11 MB/s write, 0.50 GB read, 0.11 MB/s read, 5.5 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 55.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000535 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2958,52.75 MB,17.3526%) FilterBlock(96,1.13 MB,0.370723%) IndexBlock(96,1.50 MB,0.493647%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:51:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:57.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:57.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:57 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 11 slow ops, oldest one blocked for 4508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:51:59 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:59 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:59 np0005592157 ceph-mon[74359]: Health check update: 11 slow ops, oldest one blocked for 4508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:59.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:51:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:59.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:00 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:01.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:01.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:02 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:02 np0005592157 podman[313919]: 2026-01-22 14:52:02.357458784 +0000 UTC m=+0.087302110 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:52:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 11 slow ops, oldest one blocked for 4513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:03 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:03 np0005592157 ceph-mon[74359]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:03.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:52:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:03.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:52:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:04 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:04 np0005592157 ceph-mon[74359]: Health check update: 11 slow ops, oldest one blocked for 4513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:05.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:05.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:06 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:07.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:07 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:07 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:07.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:08 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:52:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:09.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:52:09 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:09.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:10 np0005592157 podman[313943]: 2026-01-22 14:52:10.407411874 +0000 UTC m=+0.141581059 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:52:10 np0005592157 ceph-mon[74359]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:11 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:11.062 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:52:11 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:11.063 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:52:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 4523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:11 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:11.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:12 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:12 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 4523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000075s ======
Jan 22 09:52:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 22 09:52:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:13.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:14 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:14 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:15.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:15 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:15.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:52:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:52:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 4528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:16 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:17.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:18 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:18 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 4528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:19 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:19 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:19.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:19.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:20 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:21.065 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:52:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:21.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:21.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:21 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 4533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:23.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:23 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:52:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:23.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:52:24 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:24 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 4533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:25.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:26 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:52:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:52:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:27.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a421161f-5d37-4b6b-953b-04dc1b527a24 does not exist
Jan 22 09:52:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6e2c5e65-2c61-4fff-8e29-2dd3647cec36 does not exist
Jan 22 09:52:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a3ddb4d7-6a04-4087-bc2a-32d8c0dfe0e6 does not exist
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:52:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 4538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:52:28 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 4538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:28 np0005592157 podman[314302]: 2026-01-22 14:52:28.735760728 +0000 UTC m=+0.077845015 container create 57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:52:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:28 np0005592157 systemd[1]: Started libpod-conmon-57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132.scope.
Jan 22 09:52:28 np0005592157 podman[314302]: 2026-01-22 14:52:28.705297481 +0000 UTC m=+0.047381838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:52:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:52:28 np0005592157 podman[314302]: 2026-01-22 14:52:28.951238663 +0000 UTC m=+0.293322950 container init 57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:52:28 np0005592157 podman[314302]: 2026-01-22 14:52:28.9660187 +0000 UTC m=+0.308102987 container start 57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 22 09:52:28 np0005592157 podman[314302]: 2026-01-22 14:52:28.970730677 +0000 UTC m=+0.312814974 container attach 57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:52:28 np0005592157 vigorous_visvesvaraya[314318]: 167 167
Jan 22 09:52:28 np0005592157 systemd[1]: libpod-57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132.scope: Deactivated successfully.
Jan 22 09:52:28 np0005592157 podman[314302]: 2026-01-22 14:52:28.97607902 +0000 UTC m=+0.318163367 container died 57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:52:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c05690fecf8935699aa7bfddfd9fc81d365197412797ad9e5a390e789e65e5ff-merged.mount: Deactivated successfully.
Jan 22 09:52:29 np0005592157 podman[314302]: 2026-01-22 14:52:29.029524008 +0000 UTC m=+0.371608295 container remove 57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_visvesvaraya, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:52:29 np0005592157 systemd[1]: libpod-conmon-57dd82fcf20fbf7b8c10aaeb8cce19841f795812c5e3d5a8dfff9d056731d132.scope: Deactivated successfully.
Jan 22 09:52:29 np0005592157 podman[314342]: 2026-01-22 14:52:29.25452935 +0000 UTC m=+0.071611461 container create d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:52:29 np0005592157 systemd[1]: Started libpod-conmon-d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad.scope.
Jan 22 09:52:29 np0005592157 podman[314342]: 2026-01-22 14:52:29.228073102 +0000 UTC m=+0.045155253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:52:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:52:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8dae042314aa7020faefc16067788701a78536ca3d6f75e3545905631f5867/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8dae042314aa7020faefc16067788701a78536ca3d6f75e3545905631f5867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8dae042314aa7020faefc16067788701a78536ca3d6f75e3545905631f5867/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8dae042314aa7020faefc16067788701a78536ca3d6f75e3545905631f5867/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8dae042314aa7020faefc16067788701a78536ca3d6f75e3545905631f5867/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:29 np0005592157 podman[314342]: 2026-01-22 14:52:29.355212242 +0000 UTC m=+0.172294423 container init d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 09:52:29 np0005592157 podman[314342]: 2026-01-22 14:52:29.364174764 +0000 UTC m=+0.181256915 container start d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:52:29 np0005592157 podman[314342]: 2026-01-22 14:52:29.369023085 +0000 UTC m=+0.186105296 container attach d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:52:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:29.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:29 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:29.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:30 np0005592157 pensive_solomon[314359]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:52:30 np0005592157 pensive_solomon[314359]: --> relative data size: 1.0
Jan 22 09:52:30 np0005592157 pensive_solomon[314359]: --> All data devices are unavailable
Jan 22 09:52:30 np0005592157 systemd[1]: libpod-d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad.scope: Deactivated successfully.
Jan 22 09:52:30 np0005592157 podman[314342]: 2026-01-22 14:52:30.356052942 +0000 UTC m=+1.173135083 container died d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 09:52:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5a8dae042314aa7020faefc16067788701a78536ca3d6f75e3545905631f5867-merged.mount: Deactivated successfully.
Jan 22 09:52:30 np0005592157 podman[314342]: 2026-01-22 14:52:30.441209258 +0000 UTC m=+1.258291409 container remove d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:52:30 np0005592157 systemd[1]: libpod-conmon-d18d7dc61c8926119889fd240e6796b3cc04259d7c7f3a0d62c9dd24eed4c3ad.scope: Deactivated successfully.
Jan 22 09:52:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:30 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:31 np0005592157 podman[314530]: 2026-01-22 14:52:31.364871861 +0000 UTC m=+0.047442870 container create 672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:52:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:31.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:31 np0005592157 systemd[1]: Started libpod-conmon-672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010.scope.
Jan 22 09:52:31 np0005592157 podman[314530]: 2026-01-22 14:52:31.344666649 +0000 UTC m=+0.027237628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:52:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:52:31 np0005592157 podman[314530]: 2026-01-22 14:52:31.472368382 +0000 UTC m=+0.154939431 container init 672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:52:31 np0005592157 podman[314530]: 2026-01-22 14:52:31.47874142 +0000 UTC m=+0.161312399 container start 672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:52:31 np0005592157 podman[314530]: 2026-01-22 14:52:31.482542935 +0000 UTC m=+0.165113994 container attach 672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:52:31 np0005592157 objective_ganguly[314546]: 167 167
Jan 22 09:52:31 np0005592157 systemd[1]: libpod-672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010.scope: Deactivated successfully.
Jan 22 09:52:31 np0005592157 conmon[314546]: conmon 672f0aa900d957084421 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010.scope/container/memory.events
Jan 22 09:52:31 np0005592157 podman[314530]: 2026-01-22 14:52:31.489640411 +0000 UTC m=+0.172211450 container died 672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:52:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e6808884e788b3030a1cd167da4e6334356b2fdc22c9d477f0e426cfdb858676-merged.mount: Deactivated successfully.
Jan 22 09:52:31 np0005592157 podman[314530]: 2026-01-22 14:52:31.542116155 +0000 UTC m=+0.224687134 container remove 672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:52:31 np0005592157 systemd[1]: libpod-conmon-672f0aa900d957084421613ae0138b70d3930ba45f8433a74187486cd4b1f010.scope: Deactivated successfully.
Jan 22 09:52:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:31.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:31 np0005592157 podman[314572]: 2026-01-22 14:52:31.783798421 +0000 UTC m=+0.086034669 container create f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:52:31 np0005592157 systemd[1]: Started libpod-conmon-f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd.scope.
Jan 22 09:52:31 np0005592157 podman[314572]: 2026-01-22 14:52:31.743102 +0000 UTC m=+0.045338318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:52:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:52:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1bac55b218c4381ac6c445cab9c8cc869ffcb1517b08aa330a6d1d0b88fbf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1bac55b218c4381ac6c445cab9c8cc869ffcb1517b08aa330a6d1d0b88fbf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1bac55b218c4381ac6c445cab9c8cc869ffcb1517b08aa330a6d1d0b88fbf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1bac55b218c4381ac6c445cab9c8cc869ffcb1517b08aa330a6d1d0b88fbf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:31 np0005592157 podman[314572]: 2026-01-22 14:52:31.900259245 +0000 UTC m=+0.202495493 container init f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldwasser, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 09:52:31 np0005592157 podman[314572]: 2026-01-22 14:52:31.911360951 +0000 UTC m=+0.213597159 container start f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:52:31 np0005592157 podman[314572]: 2026-01-22 14:52:31.919838482 +0000 UTC m=+0.222074700 container attach f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:52:32 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:32 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:32 np0005592157 podman[314617]: 2026-01-22 14:52:32.576159201 +0000 UTC m=+0.104513658 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]: {
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:    "0": [
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:        {
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "devices": [
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "/dev/loop3"
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            ],
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "lv_name": "ceph_lv0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "lv_size": "7511998464",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "name": "ceph_lv0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "tags": {
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.cluster_name": "ceph",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.crush_device_class": "",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.encrypted": "0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.osd_id": "0",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.type": "block",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:                "ceph.vdo": "0"
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            },
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "type": "block",
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:            "vg_name": "ceph_vg0"
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:        }
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]:    ]
Jan 22 09:52:32 np0005592157 blissful_goldwasser[314588]: }
Jan 22 09:52:32 np0005592157 systemd[1]: libpod-f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd.scope: Deactivated successfully.
Jan 22 09:52:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:32 np0005592157 podman[314664]: 2026-01-22 14:52:32.773317521 +0000 UTC m=+0.040611721 container died f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:52:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:33 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:33 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9a1bac55b218c4381ac6c445cab9c8cc869ffcb1517b08aa330a6d1d0b88fbf5-merged.mount: Deactivated successfully.
Jan 22 09:52:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:33.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:34 np0005592157 podman[314664]: 2026-01-22 14:52:34.087711452 +0000 UTC m=+1.355005612 container remove f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 09:52:34 np0005592157 systemd[1]: libpod-conmon-f144f4364635cf1157a68380abf88045ec5a8d48bc93ffcf581eac270741a4bd.scope: Deactivated successfully.
Jan 22 09:52:34 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:34 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:35 np0005592157 podman[314816]: 2026-01-22 14:52:35.046357965 +0000 UTC m=+0.119811889 container create 818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kalam, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:52:35 np0005592157 podman[314816]: 2026-01-22 14:52:34.955584749 +0000 UTC m=+0.029038693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:52:35 np0005592157 systemd[1]: Started libpod-conmon-818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51.scope.
Jan 22 09:52:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:52:35 np0005592157 podman[314816]: 2026-01-22 14:52:35.152271387 +0000 UTC m=+0.225725301 container init 818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kalam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:52:35 np0005592157 podman[314816]: 2026-01-22 14:52:35.16366073 +0000 UTC m=+0.237114634 container start 818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:52:35 np0005592157 podman[314816]: 2026-01-22 14:52:35.169535126 +0000 UTC m=+0.242989050 container attach 818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:52:35 np0005592157 xenodochial_kalam[314832]: 167 167
Jan 22 09:52:35 np0005592157 systemd[1]: libpod-818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51.scope: Deactivated successfully.
Jan 22 09:52:35 np0005592157 podman[314816]: 2026-01-22 14:52:35.176695614 +0000 UTC m=+0.250149548 container died 818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 09:52:35 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6202b6484a1f84b0a770d85d43f935b325d4fd8e53f7b12fa10eff79928c4582-merged.mount: Deactivated successfully.
Jan 22 09:52:35 np0005592157 podman[314816]: 2026-01-22 14:52:35.21678547 +0000 UTC m=+0.290239364 container remove 818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 22 09:52:35 np0005592157 systemd[1]: libpod-conmon-818fa534c2d5a629f00ccc3a0ebff580b983af16cf9c188ed8c564580a12ec51.scope: Deactivated successfully.
Jan 22 09:52:35 np0005592157 podman[314856]: 2026-01-22 14:52:35.378193851 +0000 UTC m=+0.040485217 container create 450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bhabha, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:52:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:35.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:35 np0005592157 systemd[1]: Started libpod-conmon-450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867.scope.
Jan 22 09:52:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:52:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3402f0867027d426af38206aa9b6a4cb1c681549b31364d1a5c44f0576b003/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3402f0867027d426af38206aa9b6a4cb1c681549b31364d1a5c44f0576b003/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3402f0867027d426af38206aa9b6a4cb1c681549b31364d1a5c44f0576b003/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3402f0867027d426af38206aa9b6a4cb1c681549b31364d1a5c44f0576b003/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:52:35 np0005592157 podman[314856]: 2026-01-22 14:52:35.358641035 +0000 UTC m=+0.020932431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:52:35 np0005592157 podman[314856]: 2026-01-22 14:52:35.474043833 +0000 UTC m=+0.136335259 container init 450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:52:35 np0005592157 podman[314856]: 2026-01-22 14:52:35.482340619 +0000 UTC m=+0.144632005 container start 450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bhabha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:52:35 np0005592157 podman[314856]: 2026-01-22 14:52:35.487786854 +0000 UTC m=+0.150078240 container attach 450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:52:35 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:35.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]: {
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]:        "osd_id": 0,
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]:        "type": "bluestore"
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]:    }
Jan 22 09:52:36 np0005592157 nostalgic_bhabha[314873]: }
Jan 22 09:52:36 np0005592157 systemd[1]: libpod-450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867.scope: Deactivated successfully.
Jan 22 09:52:36 np0005592157 podman[314856]: 2026-01-22 14:52:36.270688199 +0000 UTC m=+0.932979565 container died 450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:52:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1c3402f0867027d426af38206aa9b6a4cb1c681549b31364d1a5c44f0576b003-merged.mount: Deactivated successfully.
Jan 22 09:52:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:37 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:37 np0005592157 podman[314856]: 2026-01-22 14:52:37.369243308 +0000 UTC m=+2.031534714 container remove 450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 09:52:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:37 np0005592157 systemd[1]: libpod-conmon-450c8c5935a025f17ad9a8e49adf33d66f3f4d327e2c0d01a1b1f3c02abef867.scope: Deactivated successfully.
Jan 22 09:52:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:52:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:52:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:37.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:37 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5859608f-f196-41f5-8ff0-6707a76286ed does not exist
Jan 22 09:52:37 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c7fcf019-8ee1-46d1-ae66-7de4b835f773 does not exist
Jan 22 09:52:37 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 464c9fd8-1aa6-4449-a542-0ed0e757128a does not exist
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.460573) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558460671, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1545, "num_deletes": 250, "total_data_size": 2143779, "memory_usage": 2181096, "flush_reason": "Manual Compaction"}
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558585216, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2099552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69997, "largest_seqno": 71541, "table_properties": {"data_size": 2092912, "index_size": 3521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16257, "raw_average_key_size": 19, "raw_value_size": 2078219, "raw_average_value_size": 2546, "num_data_blocks": 154, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093446, "oldest_key_time": 1769093446, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 124740 microseconds, and 10117 cpu microseconds.
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.585313) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2099552 bytes OK
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.585343) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.700633) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.700708) EVENT_LOG_v1 {"time_micros": 1769093558700691, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.700749) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2136926, prev total WAL file size 2136926, number of live WAL files 2.
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.702143) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2050KB)], [158(9422KB)]
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558702218, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 11748061, "oldest_snapshot_seqno": -1}
Jan 22 09:52:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 12401 keys, 10657033 bytes, temperature: kUnknown
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558980513, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 10657033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10590062, "index_size": 34858, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 339457, "raw_average_key_size": 27, "raw_value_size": 10379443, "raw_average_value_size": 836, "num_data_blocks": 1270, "num_entries": 12401, "num_filter_entries": 12401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:52:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.980782) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10657033 bytes
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:39.235149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.2 rd, 38.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(10.7) write-amplify(5.1) OK, records in: 12918, records dropped: 517 output_compression: NoCompression
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:39.235209) EVENT_LOG_v1 {"time_micros": 1769093559235187, "job": 98, "event": "compaction_finished", "compaction_time_micros": 278365, "compaction_time_cpu_micros": 45581, "output_level": 6, "num_output_files": 1, "total_output_size": 10657033, "num_input_records": 12918, "num_output_records": 12401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093559236318, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093559239839, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:38.702056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:39.239967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:39.239975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:39.239976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:39.239978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:52:39.239979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:39 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:39.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:41 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:41 np0005592157 podman[314960]: 2026-01-22 14:52:41.399827287 +0000 UTC m=+0.130288348 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:52:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:41.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:41.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:42 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:42 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:43 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:43.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:43.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:44 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:44 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:45 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:45.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:45.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:46 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:52:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:52:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:47 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:47.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:52:47
Jan 22 09:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Jan 22 09:52:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:52:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:47.628 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:52:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:47.628 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:52:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:47.628 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:52:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:47.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:48 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:49 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:49 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:49.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:49.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:50 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:51 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:51.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:52:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:51.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:52:52 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:53 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:53.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:53.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:53 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:53.973 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:52:53 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:52:53.975 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:52:54 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:54 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:55 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:55.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:55.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:56 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:57 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:52:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:57.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:52:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:57.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:58 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:52:59 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:59 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:59.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:52:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:00 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:01 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:01.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:02 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:03 np0005592157 podman[315048]: 2026-01-22 14:53:03.321736847 +0000 UTC m=+0.060302349 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 22 09:53:03 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:53:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:03.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:53:03 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:53:03.978 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:53:04 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:04 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:53:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:05 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:05.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:05.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:06 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:07.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:07 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:53:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:53:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:08 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:08 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:09.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:09 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:09 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:11 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:11.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:11.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:12 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:12 np0005592157 podman[315072]: 2026-01-22 14:53:12.354513679 +0000 UTC m=+0.094767576 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:53:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:13 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:13.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:13.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:14 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:14 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:15 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:15.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:15.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:53:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:53:16 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:17.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:17.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:18 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:19 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:19 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:19 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:19.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:19.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:20 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:21 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:21.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:21.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:22 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:23.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:23 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:23 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:23.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:24 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:25.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:25 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:25.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:26 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:27.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:27 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:27.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:28 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:28 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:29.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:29 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:30 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:30 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:31.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:31 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:33 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:33.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:34 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:34 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:34 np0005592157 podman[315210]: 2026-01-22 14:53:34.340800643 +0000 UTC m=+0.075066656 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:53:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:35 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:53:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:53:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:35.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:36 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:37 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:37.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.312358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618312404, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 920, "num_deletes": 251, "total_data_size": 1081935, "memory_usage": 1101928, "flush_reason": "Manual Compaction"}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618321469, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1064004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71542, "largest_seqno": 72461, "table_properties": {"data_size": 1059795, "index_size": 1732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10912, "raw_average_key_size": 20, "raw_value_size": 1050737, "raw_average_value_size": 1953, "num_data_blocks": 75, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093559, "oldest_key_time": 1769093559, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 9186 microseconds, and 3532 cpu microseconds.
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.321541) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1064004 bytes OK
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.321564) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323385) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323401) EVENT_LOG_v1 {"time_micros": 1769093618323396, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323419) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1077476, prev total WAL file size 1077476, number of live WAL files 2.
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.324052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1039KB)], [161(10MB)]
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618324116, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 11721037, "oldest_snapshot_seqno": -1}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 12428 keys, 10129465 bytes, temperature: kUnknown
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618417100, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10129465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10062862, "index_size": 34426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 341091, "raw_average_key_size": 27, "raw_value_size": 9852150, "raw_average_value_size": 792, "num_data_blocks": 1246, "num_entries": 12428, "num_filter_entries": 12428, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.417583) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10129465 bytes
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.419312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.8 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(20.5) write-amplify(9.5) OK, records in: 12939, records dropped: 511 output_compression: NoCompression
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.419328) EVENT_LOG_v1 {"time_micros": 1769093618419320, "job": 100, "event": "compaction_finished", "compaction_time_micros": 93200, "compaction_time_cpu_micros": 24894, "output_level": 6, "num_output_files": 1, "total_output_size": 10129465, "num_input_records": 12939, "num_output_records": 12428, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618419562, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618421088, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.421180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.421189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.421191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.421193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:53:38.421195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4c6cb3b0-af32-4374-aa03-5dab1ae717be does not exist
Jan 22 09:53:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 283052be-0623-4a5d-a037-981dc3ebbf24 does not exist
Jan 22 09:53:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev af0b5de0-9287-4606-b978-3e4c2395fbdd does not exist
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:53:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:39.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:39 np0005592157 podman[315505]: 2026-01-22 14:53:39.816859732 +0000 UTC m=+0.051962983 container create 7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:53:39 np0005592157 systemd[1]: Started libpod-conmon-7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743.scope.
Jan 22 09:53:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:39.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:53:39 np0005592157 podman[315505]: 2026-01-22 14:53:39.887770794 +0000 UTC m=+0.122874065 container init 7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:53:39 np0005592157 podman[315505]: 2026-01-22 14:53:39.795909071 +0000 UTC m=+0.031012372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:53:39 np0005592157 podman[315505]: 2026-01-22 14:53:39.893688971 +0000 UTC m=+0.128792222 container start 7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:53:39 np0005592157 podman[315505]: 2026-01-22 14:53:39.89726559 +0000 UTC m=+0.132368841 container attach 7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:53:39 np0005592157 systemd[1]: libpod-7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743.scope: Deactivated successfully.
Jan 22 09:53:39 np0005592157 elegant_mclaren[315521]: 167 167
Jan 22 09:53:39 np0005592157 conmon[315521]: conmon 7b71546f49975f0ab2ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743.scope/container/memory.events
Jan 22 09:53:39 np0005592157 podman[315505]: 2026-01-22 14:53:39.899877025 +0000 UTC m=+0.134980296 container died 7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:53:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a5630d11b6d73230af1d9563572642d81d8a7dae0e2f3b107b6714599fdc33ab-merged.mount: Deactivated successfully.
Jan 22 09:53:39 np0005592157 podman[315505]: 2026-01-22 14:53:39.947504468 +0000 UTC m=+0.182607719 container remove 7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:53:39 np0005592157 systemd[1]: libpod-conmon-7b71546f49975f0ab2caa1415f5161d4555141af5b2d80ba80d380e0d6deb743.scope: Deactivated successfully.
Jan 22 09:53:40 np0005592157 podman[315546]: 2026-01-22 14:53:40.123637285 +0000 UTC m=+0.045942223 container create bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 09:53:40 np0005592157 systemd[1]: Started libpod-conmon-bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438.scope.
Jan 22 09:53:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:53:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa9ca1f5989aba0576c8c0d23d063d2fbb73767adbf9fe64d01ae7f98e58ba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa9ca1f5989aba0576c8c0d23d063d2fbb73767adbf9fe64d01ae7f98e58ba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa9ca1f5989aba0576c8c0d23d063d2fbb73767adbf9fe64d01ae7f98e58ba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa9ca1f5989aba0576c8c0d23d063d2fbb73767adbf9fe64d01ae7f98e58ba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa9ca1f5989aba0576c8c0d23d063d2fbb73767adbf9fe64d01ae7f98e58ba4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:40 np0005592157 podman[315546]: 2026-01-22 14:53:40.101273209 +0000 UTC m=+0.023578227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:53:40 np0005592157 podman[315546]: 2026-01-22 14:53:40.212319979 +0000 UTC m=+0.134624907 container init bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:53:40 np0005592157 podman[315546]: 2026-01-22 14:53:40.22080446 +0000 UTC m=+0.143109388 container start bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:53:40 np0005592157 podman[315546]: 2026-01-22 14:53:40.224547253 +0000 UTC m=+0.146852191 container attach bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:53:40 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:41 np0005592157 intelligent_archimedes[315563]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:53:41 np0005592157 intelligent_archimedes[315563]: --> relative data size: 1.0
Jan 22 09:53:41 np0005592157 intelligent_archimedes[315563]: --> All data devices are unavailable
Jan 22 09:53:41 np0005592157 systemd[1]: libpod-bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438.scope: Deactivated successfully.
Jan 22 09:53:41 np0005592157 podman[315546]: 2026-01-22 14:53:41.039975646 +0000 UTC m=+0.962280614 container died bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:53:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-faa9ca1f5989aba0576c8c0d23d063d2fbb73767adbf9fe64d01ae7f98e58ba4-merged.mount: Deactivated successfully.
Jan 22 09:53:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:41.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:41 np0005592157 podman[315546]: 2026-01-22 14:53:41.541714713 +0000 UTC m=+1.464019671 container remove bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:53:41 np0005592157 systemd[1]: libpod-conmon-bb8f386a7b3bbf7a3b42dbc7ee4c48a30961b1891a5a08b29f3ebd96f3098438.scope: Deactivated successfully.
Jan 22 09:53:41 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:42 np0005592157 podman[315732]: 2026-01-22 14:53:42.244639811 +0000 UTC m=+0.068211746 container create bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:53:42 np0005592157 systemd[1]: Started libpod-conmon-bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430.scope.
Jan 22 09:53:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:53:42 np0005592157 podman[315732]: 2026-01-22 14:53:42.212062611 +0000 UTC m=+0.035634596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:53:42 np0005592157 podman[315732]: 2026-01-22 14:53:42.31945759 +0000 UTC m=+0.143029525 container init bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:53:42 np0005592157 podman[315732]: 2026-01-22 14:53:42.325277475 +0000 UTC m=+0.148849440 container start bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:53:42 np0005592157 elated_wilson[315748]: 167 167
Jan 22 09:53:42 np0005592157 systemd[1]: libpod-bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430.scope: Deactivated successfully.
Jan 22 09:53:42 np0005592157 podman[315732]: 2026-01-22 14:53:42.448046155 +0000 UTC m=+0.271618090 container attach bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:53:42 np0005592157 podman[315732]: 2026-01-22 14:53:42.448560848 +0000 UTC m=+0.272132793 container died bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:53:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-02ef6110db9c20ff8894b91b19d6db745d61147b4566dd06c6f2832d957c2895-merged.mount: Deactivated successfully.
Jan 22 09:53:42 np0005592157 podman[315732]: 2026-01-22 14:53:42.68808391 +0000 UTC m=+0.511655885 container remove bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:53:42 np0005592157 systemd[1]: libpod-conmon-bdcda77f5dcf412199907ce8ad0deb443d787aaa9c0c786fa0f6aaf7a9636430.scope: Deactivated successfully.
Jan 22 09:53:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:42 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:42 np0005592157 podman[315764]: 2026-01-22 14:53:42.850862597 +0000 UTC m=+0.350181245 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:53:43 np0005592157 podman[315798]: 2026-01-22 14:53:42.941446536 +0000 UTC m=+0.024415168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:53:43 np0005592157 podman[315798]: 2026-01-22 14:53:43.046505697 +0000 UTC m=+0.129474339 container create d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:53:43 np0005592157 systemd[1]: Started libpod-conmon-d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e.scope.
Jan 22 09:53:43 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:53:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802a96b71f9ffc7122740882882c350c776c2a401e61824d0d5c0867ff954cd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802a96b71f9ffc7122740882882c350c776c2a401e61824d0d5c0867ff954cd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802a96b71f9ffc7122740882882c350c776c2a401e61824d0d5c0867ff954cd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802a96b71f9ffc7122740882882c350c776c2a401e61824d0d5c0867ff954cd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:43 np0005592157 podman[315798]: 2026-01-22 14:53:43.20880856 +0000 UTC m=+0.291777252 container init d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldstine, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 09:53:43 np0005592157 podman[315798]: 2026-01-22 14:53:43.219478995 +0000 UTC m=+0.302447627 container start d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:53:43 np0005592157 podman[315798]: 2026-01-22 14:53:43.268866943 +0000 UTC m=+0.351835585 container attach d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldstine, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:53:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:43.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]: {
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:    "0": [
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:        {
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "devices": [
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "/dev/loop3"
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            ],
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "lv_name": "ceph_lv0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "lv_size": "7511998464",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "name": "ceph_lv0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "tags": {
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.cluster_name": "ceph",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.crush_device_class": "",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.encrypted": "0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.osd_id": "0",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.type": "block",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:                "ceph.vdo": "0"
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            },
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "type": "block",
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:            "vg_name": "ceph_vg0"
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:        }
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]:    ]
Jan 22 09:53:43 np0005592157 strange_goldstine[315814]: }
Jan 22 09:53:43 np0005592157 systemd[1]: libpod-d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e.scope: Deactivated successfully.
Jan 22 09:53:43 np0005592157 podman[315798]: 2026-01-22 14:53:43.985122102 +0000 UTC m=+1.068090744 container died d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:53:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-802a96b71f9ffc7122740882882c350c776c2a401e61824d0d5c0867ff954cd1-merged.mount: Deactivated successfully.
Jan 22 09:53:44 np0005592157 podman[315798]: 2026-01-22 14:53:44.06071599 +0000 UTC m=+1.143684622 container remove d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:53:44 np0005592157 systemd[1]: libpod-conmon-d2bcdfc0d6dbc2ee4b7075b5bda6cb0b6bfcd799eb688f7e8f9d35d222b4221e.scope: Deactivated successfully.
Jan 22 09:53:44 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:44 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:44 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:45 np0005592157 podman[315978]: 2026-01-22 14:53:45.012156852 +0000 UTC m=+0.069616200 container create 77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:53:45 np0005592157 systemd[1]: Started libpod-conmon-77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be.scope.
Jan 22 09:53:45 np0005592157 podman[315978]: 2026-01-22 14:53:44.982498745 +0000 UTC m=+0.039958143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:53:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:53:45 np0005592157 podman[315978]: 2026-01-22 14:53:45.109124522 +0000 UTC m=+0.166583910 container init 77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yalow, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:53:45 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:45 np0005592157 podman[315978]: 2026-01-22 14:53:45.126676628 +0000 UTC m=+0.184135976 container start 77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yalow, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:53:45 np0005592157 podman[315978]: 2026-01-22 14:53:45.132069722 +0000 UTC m=+0.189529070 container attach 77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:53:45 np0005592157 infallible_yalow[315994]: 167 167
Jan 22 09:53:45 np0005592157 systemd[1]: libpod-77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be.scope: Deactivated successfully.
Jan 22 09:53:45 np0005592157 podman[315978]: 2026-01-22 14:53:45.13480393 +0000 UTC m=+0.192263298 container died 77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:53:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-40f440c1b6c88c78e16e6caf9ea266c6b5177604c37a2056622a4aad8eea557b-merged.mount: Deactivated successfully.
Jan 22 09:53:45 np0005592157 podman[315978]: 2026-01-22 14:53:45.19072708 +0000 UTC m=+0.248186428 container remove 77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:53:45 np0005592157 systemd[1]: libpod-conmon-77093bc4220f8f357742ecee7731a12fe381842e324547e150afb0309e5327be.scope: Deactivated successfully.
Jan 22 09:53:45 np0005592157 podman[316018]: 2026-01-22 14:53:45.431526904 +0000 UTC m=+0.059998272 container create 6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 09:53:45 np0005592157 systemd[1]: Started libpod-conmon-6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0.scope.
Jan 22 09:53:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:45.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:45 np0005592157 podman[316018]: 2026-01-22 14:53:45.403784954 +0000 UTC m=+0.032256352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:53:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:53:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80da2344e6c6480d7f5cdb85547de63ce862a45eb8d7123f4a3c68b6008403a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80da2344e6c6480d7f5cdb85547de63ce862a45eb8d7123f4a3c68b6008403a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80da2344e6c6480d7f5cdb85547de63ce862a45eb8d7123f4a3c68b6008403a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80da2344e6c6480d7f5cdb85547de63ce862a45eb8d7123f4a3c68b6008403a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:53:45 np0005592157 podman[316018]: 2026-01-22 14:53:45.535629561 +0000 UTC m=+0.164100939 container init 6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:53:45 np0005592157 podman[316018]: 2026-01-22 14:53:45.550388207 +0000 UTC m=+0.178859565 container start 6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:53:45 np0005592157 podman[316018]: 2026-01-22 14:53:45.553341841 +0000 UTC m=+0.181813219 container attach 6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:53:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:46 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:46 np0005592157 busy_faraday[316034]: {
Jan 22 09:53:46 np0005592157 busy_faraday[316034]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:53:46 np0005592157 busy_faraday[316034]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:53:46 np0005592157 busy_faraday[316034]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:53:46 np0005592157 busy_faraday[316034]:        "osd_id": 0,
Jan 22 09:53:46 np0005592157 busy_faraday[316034]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:53:46 np0005592157 busy_faraday[316034]:        "type": "bluestore"
Jan 22 09:53:46 np0005592157 busy_faraday[316034]:    }
Jan 22 09:53:46 np0005592157 busy_faraday[316034]: }
Jan 22 09:53:46 np0005592157 systemd[1]: libpod-6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0.scope: Deactivated successfully.
Jan 22 09:53:46 np0005592157 podman[316018]: 2026-01-22 14:53:46.386209767 +0000 UTC m=+1.014681135 container died 6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:53:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-80da2344e6c6480d7f5cdb85547de63ce862a45eb8d7123f4a3c68b6008403a4-merged.mount: Deactivated successfully.
Jan 22 09:53:46 np0005592157 podman[316018]: 2026-01-22 14:53:46.454989777 +0000 UTC m=+1.083461155 container remove 6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:53:46 np0005592157 systemd[1]: libpod-conmon-6bcba9a0e6249c8c2080cf7dd7e4a61d46605012879c9d665f3808cf72ceb3c0.scope: Deactivated successfully.
Jan 22 09:53:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:53:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:53:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1b8d4f1b-4716-4c45-a1ea-4869793df85c does not exist
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d4330612-7ee9-4a3c-a82e-0f0a4d854f09 does not exist
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5289d769-ed0f-40d9-a14f-b5056a12a245 does not exist
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:53:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:47.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:53:47
Jan 22 09:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Jan 22 09:53:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:53:47 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:53:47.629 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:53:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:53:47.630 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:53:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:53:47.630 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:53:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:47.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:48 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:48 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:49 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:49.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:50 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:53:50.104 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:53:50 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:53:50.106 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:53:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:50 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:51.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:51.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:52 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:52 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:53.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:53 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:54 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:53:54.108 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:53:54 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:54 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:55.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:55 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:55.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:56 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:57.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:57 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:53:59 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:59 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:53:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:53:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:53:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:59.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:00 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:00 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:01.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:01 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:01.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:03 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:04 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:04 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:04 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:54:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:05 np0005592157 podman[316179]: 2026-01-22 14:54:05.359185228 +0000 UTC m=+0.081345603 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:54:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:05 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:05.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:06 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:07 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:07.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:09 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:09 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:09.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:09.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:10 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:10 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:11.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:11 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:11.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:12 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:13 np0005592157 podman[316203]: 2026-01-22 14:54:13.346189575 +0000 UTC m=+0.085868125 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.558453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653559117, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 648, "num_deletes": 256, "total_data_size": 666584, "memory_usage": 679544, "flush_reason": "Manual Compaction"}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653586018, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 656855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72462, "largest_seqno": 73109, "table_properties": {"data_size": 653574, "index_size": 1124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8446, "raw_average_key_size": 19, "raw_value_size": 646531, "raw_average_value_size": 1493, "num_data_blocks": 48, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093619, "oldest_key_time": 1769093619, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 27646 microseconds, and 3572 cpu microseconds.
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.586095) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 656855 bytes OK
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.586123) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.589952) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.589968) EVENT_LOG_v1 {"time_micros": 1769093653589963, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.589991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 663076, prev total WAL file size 663076, number of live WAL files 2.
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.590704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373732' seq:0, type:0; will stop at (end)
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(641KB)], [164(9892KB)]
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653590819, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10786320, "oldest_snapshot_seqno": -1}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 12337 keys, 10642878 bytes, temperature: kUnknown
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653741102, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10642878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10576096, "index_size": 34861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30853, "raw_key_size": 340392, "raw_average_key_size": 27, "raw_value_size": 10366139, "raw_average_value_size": 840, "num_data_blocks": 1261, "num_entries": 12337, "num_filter_entries": 12337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.741541) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10642878 bytes
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.886226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.7 rd, 70.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(32.6) write-amplify(16.2) OK, records in: 12861, records dropped: 524 output_compression: NoCompression
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.886330) EVENT_LOG_v1 {"time_micros": 1769093653886294, "job": 102, "event": "compaction_finished", "compaction_time_micros": 150413, "compaction_time_cpu_micros": 38435, "output_level": 6, "num_output_files": 1, "total_output_size": 10642878, "num_input_records": 12861, "num_output_records": 12337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653887160, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653891190, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.590508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.891377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.891390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.891393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.891396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:54:13.891399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:14 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:15 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:54:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:54:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:17 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:17 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:18 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:19 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:19 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:19.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:21 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:21.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:21.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:22 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:22 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:23.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:23 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:23.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:24 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:24 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:25.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:25 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:54:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:25.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:54:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:27 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:27.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:27.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:28 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:28 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 4657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:29 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:29 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 4657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:29.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:29.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:30 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:31 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:54:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 14K writes, 48K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 14K writes, 4534 syncs, 3.24 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1311 writes, 2485 keys, 1311 commit groups, 1.0 writes per commit group, ingest: 0.95 MB, 0.00 MB/s#012Interval WAL: 1311 writes, 608 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:54:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:31.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:32 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:33.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:33 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:33.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:34 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:34 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:35.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:35 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:35.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:36 np0005592157 podman[316345]: 2026-01-22 14:54:36.364038619 +0000 UTC m=+0.088614003 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:54:36 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 09:54:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:37.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:37 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:37.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:38 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:38 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 22 09:54:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:39.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:39 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:39.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:40 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 18 op/s
Jan 22 09:54:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:41.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:54:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:41.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:54:42 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 18 op/s
Jan 22 09:54:43 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:43 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:43.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:43.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:44 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:44 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:44 np0005592157 podman[316373]: 2026-01-22 14:54:44.385876891 +0000 UTC m=+0.109759819 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:54:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 09:54:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 691 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 394 KiB/s wr, 22 op/s
Jan 22 09:54:45 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:45.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:46 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:54:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:54:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 22 09:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:54:47
Jan 22 09:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'images', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Jan 22 09:54:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:54:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:47.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:47 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:54:47.631 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:54:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:54:47.634 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:54:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:54:47.634 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:54:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:47.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:47 np0005592157 podman[316576]: 2026-01-22 14:54:47.994379239 +0000 UTC m=+0.076401249 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:54:48 np0005592157 podman[316576]: 2026-01-22 14:54:48.111391407 +0000 UTC m=+0.193413387 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:54:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 701 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Jan 22 09:54:48 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:48 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:49 np0005592157 podman[316731]: 2026-01-22 14:54:49.036409704 +0000 UTC m=+0.092501900 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:54:49 np0005592157 podman[316731]: 2026-01-22 14:54:49.069806024 +0000 UTC m=+0.125898190 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 09:54:49 np0005592157 podman[316798]: 2026-01-22 14:54:49.362618949 +0000 UTC m=+0.079986868 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph)
Jan 22 09:54:49 np0005592157 podman[316798]: 2026-01-22 14:54:49.407504095 +0000 UTC m=+0.124871994 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, version=2.2.4, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:54:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:49.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:49.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 28406c17-242b-4c7a-93e0-2386e89c31e8 does not exist
Jan 22 09:54:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3915c6d0-1905-453a-b888-f569e8d07c98 does not exist
Jan 22 09:54:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1d7a0ea2-2c89-4cfd-a34a-0edd119f05cb does not exist
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:54:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:54:51 np0005592157 podman[317102]: 2026-01-22 14:54:51.224266221 +0000 UTC m=+0.070681837 container create 28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:54:51 np0005592157 systemd[1]: Started libpod-conmon-28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668.scope.
Jan 22 09:54:51 np0005592157 podman[317102]: 2026-01-22 14:54:51.193022405 +0000 UTC m=+0.039438071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:54:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:54:51 np0005592157 podman[317102]: 2026-01-22 14:54:51.327147478 +0000 UTC m=+0.173563084 container init 28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:54:51 np0005592157 podman[317102]: 2026-01-22 14:54:51.335830783 +0000 UTC m=+0.182246369 container start 28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rubin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:54:51 np0005592157 stoic_rubin[317119]: 167 167
Jan 22 09:54:51 np0005592157 systemd[1]: libpod-28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668.scope: Deactivated successfully.
Jan 22 09:54:51 np0005592157 podman[317102]: 2026-01-22 14:54:51.339783042 +0000 UTC m=+0.186198658 container attach 28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 09:54:51 np0005592157 conmon[317119]: conmon 28e9117b55d95a4afc9e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668.scope/container/memory.events
Jan 22 09:54:51 np0005592157 podman[317102]: 2026-01-22 14:54:51.341659598 +0000 UTC m=+0.188075174 container died 28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rubin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:54:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e5e6c40b07f336de7e05aceccf6b6eb3d62f8b58ac5bde8cb7df0e0c1bf9d6b3-merged.mount: Deactivated successfully.
Jan 22 09:54:51 np0005592157 podman[317102]: 2026-01-22 14:54:51.394529372 +0000 UTC m=+0.240944988 container remove 28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:54:51 np0005592157 systemd[1]: libpod-conmon-28e9117b55d95a4afc9e85ca7fc323b760a556987ceb71c0627d477e8a4cc668.scope: Deactivated successfully.
Jan 22 09:54:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:51.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:51 np0005592157 podman[317142]: 2026-01-22 14:54:51.587894447 +0000 UTC m=+0.059597442 container create d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 09:54:51 np0005592157 systemd[1]: Started libpod-conmon-d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38.scope.
Jan 22 09:54:51 np0005592157 podman[317142]: 2026-01-22 14:54:51.55742171 +0000 UTC m=+0.029124745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:54:51 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:54:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1bc24257b34ffc161767af53096636f0a4ccb344764d7bc687fcc74cc8e791/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1bc24257b34ffc161767af53096636f0a4ccb344764d7bc687fcc74cc8e791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1bc24257b34ffc161767af53096636f0a4ccb344764d7bc687fcc74cc8e791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1bc24257b34ffc161767af53096636f0a4ccb344764d7bc687fcc74cc8e791/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:51 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1bc24257b34ffc161767af53096636f0a4ccb344764d7bc687fcc74cc8e791/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:51 np0005592157 podman[317142]: 2026-01-22 14:54:51.692184049 +0000 UTC m=+0.163887114 container init d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 09:54:51 np0005592157 podman[317142]: 2026-01-22 14:54:51.705760526 +0000 UTC m=+0.177463481 container start d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:54:51 np0005592157 podman[317142]: 2026-01-22 14:54:51.709822287 +0000 UTC m=+0.181525252 container attach d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:54:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:51.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:52 np0005592157 optimistic_buck[317160]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:54:52 np0005592157 optimistic_buck[317160]: --> relative data size: 1.0
Jan 22 09:54:52 np0005592157 optimistic_buck[317160]: --> All data devices are unavailable
Jan 22 09:54:52 np0005592157 systemd[1]: libpod-d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38.scope: Deactivated successfully.
Jan 22 09:54:52 np0005592157 podman[317142]: 2026-01-22 14:54:52.568654329 +0000 UTC m=+1.040357324 container died d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:54:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8f1bc24257b34ffc161767af53096636f0a4ccb344764d7bc687fcc74cc8e791-merged.mount: Deactivated successfully.
Jan 22 09:54:52 np0005592157 podman[317142]: 2026-01-22 14:54:52.636042214 +0000 UTC m=+1.107745179 container remove d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:54:52 np0005592157 systemd[1]: libpod-conmon-d7980eb0b5a97dead86141109a612200b5f63b9950e8f70deb16bdb4160e5c38.scope: Deactivated successfully.
Jan 22 09:54:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 MiB/s wr, 18 op/s
Jan 22 09:54:52 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:53 np0005592157 podman[317332]: 2026-01-22 14:54:53.531526605 +0000 UTC m=+0.077777753 container create e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 09:54:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:53.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:53 np0005592157 systemd[1]: Started libpod-conmon-e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9.scope.
Jan 22 09:54:53 np0005592157 podman[317332]: 2026-01-22 14:54:53.496572277 +0000 UTC m=+0.042823465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:54:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:54:53 np0005592157 podman[317332]: 2026-01-22 14:54:53.628540696 +0000 UTC m=+0.174791834 container init e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:54:53 np0005592157 podman[317332]: 2026-01-22 14:54:53.642140024 +0000 UTC m=+0.188391132 container start e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:54:53 np0005592157 podman[317332]: 2026-01-22 14:54:53.64599786 +0000 UTC m=+0.192248998 container attach e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:54:53 np0005592157 systemd[1]: libpod-e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9.scope: Deactivated successfully.
Jan 22 09:54:53 np0005592157 nervous_bohr[317348]: 167 167
Jan 22 09:54:53 np0005592157 conmon[317348]: conmon e1a051976f0e5e8e198b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9.scope/container/memory.events
Jan 22 09:54:53 np0005592157 podman[317332]: 2026-01-22 14:54:53.652257586 +0000 UTC m=+0.198508724 container died e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:54:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-392437b792c1b3a85c5ea876e8a24692a37ae456721b6f0e33edfbccd3381c4c-merged.mount: Deactivated successfully.
Jan 22 09:54:53 np0005592157 podman[317332]: 2026-01-22 14:54:53.699089719 +0000 UTC m=+0.245340847 container remove e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:54:53 np0005592157 systemd[1]: libpod-conmon-e1a051976f0e5e8e198b04815a8fab5d758eab562a3fed3c95ce5a39760296c9.scope: Deactivated successfully.
Jan 22 09:54:53 np0005592157 podman[317421]: 2026-01-22 14:54:53.955641325 +0000 UTC m=+0.078259376 container create 2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:54:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:53.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:54 np0005592157 podman[317421]: 2026-01-22 14:54:53.923380203 +0000 UTC m=+0.045998344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:54:54 np0005592157 systemd[1]: Started libpod-conmon-2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25.scope.
Jan 22 09:54:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:54:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7a9645d9d0dd50fdefb9cc621f35b76cffc30af4085a6a931cf3dc43f58045/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7a9645d9d0dd50fdefb9cc621f35b76cffc30af4085a6a931cf3dc43f58045/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7a9645d9d0dd50fdefb9cc621f35b76cffc30af4085a6a931cf3dc43f58045/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7a9645d9d0dd50fdefb9cc621f35b76cffc30af4085a6a931cf3dc43f58045/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:54 np0005592157 podman[317421]: 2026-01-22 14:54:54.086022025 +0000 UTC m=+0.208640086 container init 2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 09:54:54 np0005592157 podman[317421]: 2026-01-22 14:54:54.10315162 +0000 UTC m=+0.225769711 container start 2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:54:54 np0005592157 podman[317421]: 2026-01-22 14:54:54.107887978 +0000 UTC m=+0.230506029 container attach 2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:54:54 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:54 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:54 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:54 np0005592157 silly_clarke[317438]: {
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:    "0": [
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:        {
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "devices": [
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "/dev/loop3"
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            ],
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "lv_name": "ceph_lv0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "lv_size": "7511998464",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "name": "ceph_lv0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "tags": {
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.cluster_name": "ceph",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.crush_device_class": "",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.encrypted": "0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.osd_id": "0",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.type": "block",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:                "ceph.vdo": "0"
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            },
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "type": "block",
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:            "vg_name": "ceph_vg0"
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:        }
Jan 22 09:54:54 np0005592157 silly_clarke[317438]:    ]
Jan 22 09:54:54 np0005592157 silly_clarke[317438]: }
Jan 22 09:54:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 MiB/s wr, 18 op/s
Jan 22 09:54:54 np0005592157 systemd[1]: libpod-2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25.scope: Deactivated successfully.
Jan 22 09:54:54 np0005592157 podman[317421]: 2026-01-22 14:54:54.847111118 +0000 UTC m=+0.969729199 container died 2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:54:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ab7a9645d9d0dd50fdefb9cc621f35b76cffc30af4085a6a931cf3dc43f58045-merged.mount: Deactivated successfully.
Jan 22 09:54:54 np0005592157 podman[317421]: 2026-01-22 14:54:54.928817738 +0000 UTC m=+1.051435819 container remove 2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 09:54:54 np0005592157 systemd[1]: libpod-conmon-2669423fb0317105a1201ae3f9d20cb7c2c6c675b079e64ce8e7b8503e4b6b25.scope: Deactivated successfully.
Jan 22 09:54:55 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:54:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:55.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:54:55 np0005592157 podman[317604]: 2026-01-22 14:54:55.768824812 +0000 UTC m=+0.052426684 container create 529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:54:55 np0005592157 systemd[1]: Started libpod-conmon-529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330.scope.
Jan 22 09:54:55 np0005592157 podman[317604]: 2026-01-22 14:54:55.742675312 +0000 UTC m=+0.026277164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:54:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:54:55 np0005592157 podman[317604]: 2026-01-22 14:54:55.892438684 +0000 UTC m=+0.176040546 container init 529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 09:54:55 np0005592157 podman[317604]: 2026-01-22 14:54:55.903329235 +0000 UTC m=+0.186931067 container start 529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:54:55 np0005592157 podman[317604]: 2026-01-22 14:54:55.907108339 +0000 UTC m=+0.190710171 container attach 529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:54:55 np0005592157 pensive_shirley[317621]: 167 167
Jan 22 09:54:55 np0005592157 systemd[1]: libpod-529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330.scope: Deactivated successfully.
Jan 22 09:54:55 np0005592157 podman[317604]: 2026-01-22 14:54:55.912565684 +0000 UTC m=+0.196167556 container died 529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 22 09:54:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-69566689d80bfbd1e4bd8fc7848506fb6c25e43116248c6399d1432f6b0dae73-merged.mount: Deactivated successfully.
Jan 22 09:54:55 np0005592157 podman[317604]: 2026-01-22 14:54:55.958967337 +0000 UTC m=+0.242569169 container remove 529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:54:55 np0005592157 systemd[1]: libpod-conmon-529a7eed6fc5ede75a47dc72ae5a1b26df16f978a922c82cbb07c15a29218330.scope: Deactivated successfully.
Jan 22 09:54:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:55.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:56 np0005592157 podman[317645]: 2026-01-22 14:54:56.225639804 +0000 UTC m=+0.075326803 container create 50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:54:56 np0005592157 systemd[1]: Started libpod-conmon-50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d.scope.
Jan 22 09:54:56 np0005592157 podman[317645]: 2026-01-22 14:54:56.191033224 +0000 UTC m=+0.040720323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:54:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:54:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efefde5ec430bec6f6e70265b9dd6812608d9a1fa310c23d3c679973e20db00d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efefde5ec430bec6f6e70265b9dd6812608d9a1fa310c23d3c679973e20db00d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efefde5ec430bec6f6e70265b9dd6812608d9a1fa310c23d3c679973e20db00d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efefde5ec430bec6f6e70265b9dd6812608d9a1fa310c23d3c679973e20db00d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:54:56 np0005592157 podman[317645]: 2026-01-22 14:54:56.308728159 +0000 UTC m=+0.158415208 container init 50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:54:56 np0005592157 podman[317645]: 2026-01-22 14:54:56.314389339 +0000 UTC m=+0.164076338 container start 50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:54:56 np0005592157 podman[317645]: 2026-01-22 14:54:56.31803299 +0000 UTC m=+0.167719999 container attach 50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_raman, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 09:54:56 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 09:54:57 np0005592157 crazy_raman[317661]: {
Jan 22 09:54:57 np0005592157 crazy_raman[317661]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:54:57 np0005592157 crazy_raman[317661]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:54:57 np0005592157 crazy_raman[317661]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:54:57 np0005592157 crazy_raman[317661]:        "osd_id": 0,
Jan 22 09:54:57 np0005592157 crazy_raman[317661]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:54:57 np0005592157 crazy_raman[317661]:        "type": "bluestore"
Jan 22 09:54:57 np0005592157 crazy_raman[317661]:    }
Jan 22 09:54:57 np0005592157 crazy_raman[317661]: }
Jan 22 09:54:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:54:57.186 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:54:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:54:57.190 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:54:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:54:57.193 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:54:57 np0005592157 systemd[1]: libpod-50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d.scope: Deactivated successfully.
Jan 22 09:54:57 np0005592157 podman[317645]: 2026-01-22 14:54:57.202018096 +0000 UTC m=+1.051705095 container died 50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:54:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-efefde5ec430bec6f6e70265b9dd6812608d9a1fa310c23d3c679973e20db00d-merged.mount: Deactivated successfully.
Jan 22 09:54:57 np0005592157 podman[317645]: 2026-01-22 14:54:57.368840462 +0000 UTC m=+1.218527471 container remove 50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:54:57 np0005592157 systemd[1]: libpod-conmon-50011cdd44daf4886df7500309a9eae4114316e1ad29891a39275e937a613a8d.scope: Deactivated successfully.
Jan 22 09:54:57 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:54:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:54:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7f31d7ce-ea88-434f-b03c-941e46714acd does not exist
Jan 22 09:54:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev aefcdda8-4c6c-4e40-a216-0dabb18a4ec7 does not exist
Jan 22 09:54:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6d1c8eae-2f09-4915-b27e-ad8fa731474c does not exist
Jan 22 09:54:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:57.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:57 np0005592157 auditd[703]: Audit daemon rotating log files
Jan 22 09:54:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:57.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:58 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:54:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:59.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:59 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:59 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:54:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:59.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:01 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:01 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:01.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:02 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:03 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:55:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:55:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:55:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:55:04 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:04 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008133476991438674 of space, bias 1.0, pg target 0.24075091894658476 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:55:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:05 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:05.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:06 np0005592157 podman[317755]: 2026-01-22 14:55:06.577754689 +0000 UTC m=+0.061012157 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:55:06 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:07.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:07 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:55:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:55:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:08 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:08 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:09 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:10 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:11 np0005592157 ceph-mon[74359]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:12 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 4702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:13 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:13 np0005592157 ceph-mon[74359]: Health check update: 13 slow ops, oldest one blocked for 4702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:14.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:14 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:15 np0005592157 podman[317838]: 2026-01-22 14:55:15.390782659 +0000 UTC m=+0.132132845 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 09:55:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:15 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:16.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:55:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:55:16 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:17.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:17 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:18.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1400842831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1400842831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:18 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:19 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:20.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:20 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:21 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:22.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:22 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:23 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:23 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:24 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:25 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:26.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:26 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:26 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:28.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:28 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:29 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:29 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:29.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:30 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:31 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:32 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:33 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:34.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:34 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:34 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:35 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:36.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:37 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:37 np0005592157 podman[317934]: 2026-01-22 14:55:37.352679627 +0000 UTC m=+0.075231061 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:55:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:38.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:38 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:55:39 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:39 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:55:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:55:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:55:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:55:40 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 09:55:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:41 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:55:41.937 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:55:41 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:55:41.938 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:55:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:42.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 09:55:42 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:44 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:44 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:44 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 09:55:44 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:55:44.941 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:55:45 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:46 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:46 np0005592157 podman[317958]: 2026-01-22 14:55:46.375222914 +0000 UTC m=+0.102834586 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 09:55:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:55:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:55:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:55:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:55:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:55:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:55:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 09:55:47 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:55:47
Jan 22 09:55:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:55:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:55:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr']
Jan 22 09:55:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:55:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:55:47.632 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:55:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:55:47.633 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:55:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:55:47.634 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:55:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:48.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:48 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4737 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 09:55:49 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:49 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4737 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:55:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:55:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:50 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 09:55:51 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:51.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:52 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 09:55:53 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4742 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:55:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:53.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:55:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:54.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:54 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:54 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4742 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 340 B/s wr, 1 op/s
Jan 22 09:55:55 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:56.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:56 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:55:57 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:58 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4747 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:55:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:55:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:55:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:55:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:55:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:55:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:59 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:59 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4747 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:56:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:56:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:56:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:56:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:56:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 96004222-d169-4ff2-b833-ea95ab9b2557 does not exist
Jan 22 09:56:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ef02a21c-fb6d-404e-b175-3c6367303f1e does not exist
Jan 22 09:56:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev afeab573-9a13-4954-99f0-892f3fc67410 does not exist
Jan 22 09:56:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:56:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:01 np0005592157 podman[318321]: 2026-01-22 14:56:01.836024461 +0000 UTC m=+0.048479506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:56:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:02.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:02 np0005592157 podman[318321]: 2026-01-22 14:56:02.261406062 +0000 UTC m=+0.473861067 container create 4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 09:56:02 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:56:02 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592157 systemd[1]: Started libpod-conmon-4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07.scope.
Jan 22 09:56:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:56:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:56:03 np0005592157 podman[318321]: 2026-01-22 14:56:03.043741652 +0000 UTC m=+1.256196647 container init 4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:56:03 np0005592157 podman[318321]: 2026-01-22 14:56:03.053857684 +0000 UTC m=+1.266312669 container start 4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:56:03 np0005592157 dreamy_goldberg[318338]: 167 167
Jan 22 09:56:03 np0005592157 systemd[1]: libpod-4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07.scope: Deactivated successfully.
Jan 22 09:56:03 np0005592157 podman[318321]: 2026-01-22 14:56:03.195886543 +0000 UTC m=+1.408341548 container attach 4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 09:56:03 np0005592157 podman[318321]: 2026-01-22 14:56:03.197007231 +0000 UTC m=+1.409462226 container died 4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:56:03 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2a31571ba1f366ee53c56da1ee7e6d8d2892cc9fb7c2f49c2b25987b0458de5c-merged.mount: Deactivated successfully.
Jan 22 09:56:04 np0005592157 podman[318321]: 2026-01-22 14:56:04.034588335 +0000 UTC m=+2.247043330 container remove 4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:56:04 np0005592157 systemd[1]: libpod-conmon-4b0c1d005cb41afef20190ab110e997c7ef8012c5261b1fc4209d2632cbf0e07.scope: Deactivated successfully.
Jan 22 09:56:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:04 np0005592157 podman[318364]: 2026-01-22 14:56:04.272889037 +0000 UTC m=+0.033149495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:56:04 np0005592157 podman[318364]: 2026-01-22 14:56:04.375398444 +0000 UTC m=+0.135658822 container create 7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:56:04 np0005592157 systemd[1]: Started libpod-conmon-7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6.scope.
Jan 22 09:56:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:56:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72965de0b3e0e6ce72c7cc6944ae4548a0038aebfea984d0d2aad7ba8bf55a73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72965de0b3e0e6ce72c7cc6944ae4548a0038aebfea984d0d2aad7ba8bf55a73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72965de0b3e0e6ce72c7cc6944ae4548a0038aebfea984d0d2aad7ba8bf55a73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72965de0b3e0e6ce72c7cc6944ae4548a0038aebfea984d0d2aad7ba8bf55a73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72965de0b3e0e6ce72c7cc6944ae4548a0038aebfea984d0d2aad7ba8bf55a73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:04 np0005592157 podman[318364]: 2026-01-22 14:56:04.484668898 +0000 UTC m=+0.244929266 container init 7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:56:04 np0005592157 podman[318364]: 2026-01-22 14:56:04.49962955 +0000 UTC m=+0.259889918 container start 7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_babbage, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:56:04 np0005592157 podman[318364]: 2026-01-22 14:56:04.522831127 +0000 UTC m=+0.283091495 container attach 7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_babbage, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 09:56:04 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:04 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008133476991438674 of space, bias 1.0, pg target 0.24075091894658476 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:56:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 09:56:05 np0005592157 laughing_babbage[318380]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:56:05 np0005592157 laughing_babbage[318380]: --> relative data size: 1.0
Jan 22 09:56:05 np0005592157 laughing_babbage[318380]: --> All data devices are unavailable
Jan 22 09:56:05 np0005592157 systemd[1]: libpod-7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6.scope: Deactivated successfully.
Jan 22 09:56:05 np0005592157 podman[318364]: 2026-01-22 14:56:05.395191865 +0000 UTC m=+1.155452273 container died 7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:56:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-72965de0b3e0e6ce72c7cc6944ae4548a0038aebfea984d0d2aad7ba8bf55a73-merged.mount: Deactivated successfully.
Jan 22 09:56:05 np0005592157 podman[318364]: 2026-01-22 14:56:05.575318301 +0000 UTC m=+1.335578689 container remove 7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_babbage, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:56:05 np0005592157 systemd[1]: libpod-conmon-7a840490123dab4b3d7d67b803d1019e1e188504de8caead78799f8d2793e9c6.scope: Deactivated successfully.
Jan 22 09:56:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:05.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:05 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:06.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:06 np0005592157 podman[318546]: 2026-01-22 14:56:06.289750775 +0000 UTC m=+0.053964952 container create 43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:56:06 np0005592157 systemd[1]: Started libpod-conmon-43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4.scope.
Jan 22 09:56:06 np0005592157 podman[318546]: 2026-01-22 14:56:06.264092867 +0000 UTC m=+0.028307054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:56:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:56:06 np0005592157 podman[318546]: 2026-01-22 14:56:06.386096519 +0000 UTC m=+0.150310706 container init 43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 09:56:06 np0005592157 podman[318546]: 2026-01-22 14:56:06.398714093 +0000 UTC m=+0.162928230 container start 43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:56:06 np0005592157 quirky_margulis[318563]: 167 167
Jan 22 09:56:06 np0005592157 systemd[1]: libpod-43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4.scope: Deactivated successfully.
Jan 22 09:56:06 np0005592157 podman[318546]: 2026-01-22 14:56:06.406175238 +0000 UTC m=+0.170389375 container attach 43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:56:06 np0005592157 podman[318546]: 2026-01-22 14:56:06.406850605 +0000 UTC m=+0.171064732 container died 43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_margulis, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:56:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-342f723e0ab01d922db4572da647991eb01826ae0763299b49761c867070fd0e-merged.mount: Deactivated successfully.
Jan 22 09:56:06 np0005592157 podman[318546]: 2026-01-22 14:56:06.450888029 +0000 UTC m=+0.215102176 container remove 43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_margulis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:56:06 np0005592157 systemd[1]: libpod-conmon-43c886d8f00a69ebb0a32ea759b5074fc3e2b6d6e0bf251c920a20901a38caf4.scope: Deactivated successfully.
Jan 22 09:56:06 np0005592157 podman[318587]: 2026-01-22 14:56:06.702286516 +0000 UTC m=+0.079983888 container create f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:56:06 np0005592157 systemd[1]: Started libpod-conmon-f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248.scope.
Jan 22 09:56:06 np0005592157 podman[318587]: 2026-01-22 14:56:06.66983668 +0000 UTC m=+0.047534132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:56:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:56:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdefe71b50cc704bdf4744c9408d6d5639ff65e7b13625a0d17b271b0d368af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdefe71b50cc704bdf4744c9408d6d5639ff65e7b13625a0d17b271b0d368af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdefe71b50cc704bdf4744c9408d6d5639ff65e7b13625a0d17b271b0d368af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdefe71b50cc704bdf4744c9408d6d5639ff65e7b13625a0d17b271b0d368af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:06 np0005592157 podman[318587]: 2026-01-22 14:56:06.824447792 +0000 UTC m=+0.202145274 container init f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilbur, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 09:56:06 np0005592157 podman[318587]: 2026-01-22 14:56:06.837337932 +0000 UTC m=+0.215035304 container start f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilbur, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:56:06 np0005592157 podman[318587]: 2026-01-22 14:56:06.841071975 +0000 UTC m=+0.218769547 container attach f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilbur, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:56:06 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 11 op/s
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]: {
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:    "0": [
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:        {
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "devices": [
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "/dev/loop3"
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            ],
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "lv_name": "ceph_lv0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "lv_size": "7511998464",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "name": "ceph_lv0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "tags": {
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.cluster_name": "ceph",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.crush_device_class": "",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.encrypted": "0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.osd_id": "0",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.type": "block",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:                "ceph.vdo": "0"
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            },
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "type": "block",
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:            "vg_name": "ceph_vg0"
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:        }
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]:    ]
Jan 22 09:56:07 np0005592157 eloquent_wilbur[318603]: }
Jan 22 09:56:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:07.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:07 np0005592157 systemd[1]: libpod-f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248.scope: Deactivated successfully.
Jan 22 09:56:07 np0005592157 podman[318587]: 2026-01-22 14:56:07.669131502 +0000 UTC m=+1.046828874 container died f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:56:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bcdefe71b50cc704bdf4744c9408d6d5639ff65e7b13625a0d17b271b0d368af-merged.mount: Deactivated successfully.
Jan 22 09:56:07 np0005592157 podman[318587]: 2026-01-22 14:56:07.870008154 +0000 UTC m=+1.247705526 container remove f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 09:56:07 np0005592157 systemd[1]: libpod-conmon-f3fd1f1c69a8901008ef72997adf63e29d7a221e47be72b4400f16755e162248.scope: Deactivated successfully.
Jan 22 09:56:07 np0005592157 podman[318613]: 2026-01-22 14:56:07.893804136 +0000 UTC m=+0.191712096 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:56:07 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:56:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:56:08 np0005592157 podman[318790]: 2026-01-22 14:56:08.657478572 +0000 UTC m=+0.054095665 container create 46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banzai, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:56:08 np0005592157 systemd[1]: Started libpod-conmon-46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598.scope.
Jan 22 09:56:08 np0005592157 podman[318790]: 2026-01-22 14:56:08.638560352 +0000 UTC m=+0.035177465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:56:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:56:08 np0005592157 podman[318790]: 2026-01-22 14:56:08.759782594 +0000 UTC m=+0.156399767 container init 46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:56:08 np0005592157 podman[318790]: 2026-01-22 14:56:08.773031753 +0000 UTC m=+0.169648866 container start 46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banzai, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:56:08 np0005592157 podman[318790]: 2026-01-22 14:56:08.776282684 +0000 UTC m=+0.172899817 container attach 46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:56:08 np0005592157 systemd[1]: libpod-46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598.scope: Deactivated successfully.
Jan 22 09:56:08 np0005592157 happy_banzai[318806]: 167 167
Jan 22 09:56:08 np0005592157 conmon[318806]: conmon 46e361f22c3d50f879f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598.scope/container/memory.events
Jan 22 09:56:08 np0005592157 podman[318790]: 2026-01-22 14:56:08.784217791 +0000 UTC m=+0.180834924 container died 46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:56:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7f3dd2329dddd4aeec7dd6fef20ab66c73cfd93a6018b8cb8147e32c783c75eb-merged.mount: Deactivated successfully.
Jan 22 09:56:08 np0005592157 podman[318790]: 2026-01-22 14:56:08.839425233 +0000 UTC m=+0.236042336 container remove 46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banzai, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:56:08 np0005592157 systemd[1]: libpod-conmon-46e361f22c3d50f879f67bb6bab4b447b11e60bb23de8619557566091d25e598.scope: Deactivated successfully.
Jan 22 09:56:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:08 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:09 np0005592157 podman[318830]: 2026-01-22 14:56:09.107074464 +0000 UTC m=+0.067097878 container create 2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:56:09 np0005592157 systemd[1]: Started libpod-conmon-2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867.scope.
Jan 22 09:56:09 np0005592157 podman[318830]: 2026-01-22 14:56:09.076266749 +0000 UTC m=+0.036290263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:56:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:56:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5aec353619c16df06df9278580aefb27dcb4e1804bbc86a18da688bbaf4886d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5aec353619c16df06df9278580aefb27dcb4e1804bbc86a18da688bbaf4886d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5aec353619c16df06df9278580aefb27dcb4e1804bbc86a18da688bbaf4886d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5aec353619c16df06df9278580aefb27dcb4e1804bbc86a18da688bbaf4886d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:56:09 np0005592157 podman[318830]: 2026-01-22 14:56:09.22241081 +0000 UTC m=+0.182434264 container init 2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:56:09 np0005592157 podman[318830]: 2026-01-22 14:56:09.231319122 +0000 UTC m=+0.191342536 container start 2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:56:09 np0005592157 podman[318830]: 2026-01-22 14:56:09.235124846 +0000 UTC m=+0.195148290 container attach 2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 09:56:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:10 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]: {
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]:        "osd_id": 0,
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]:        "type": "bluestore"
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]:    }
Jan 22 09:56:10 np0005592157 stupefied_chaplygin[318847]: }
Jan 22 09:56:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:10 np0005592157 systemd[1]: libpod-2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867.scope: Deactivated successfully.
Jan 22 09:56:10 np0005592157 podman[318871]: 2026-01-22 14:56:10.182515539 +0000 UTC m=+0.035680428 container died 2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:56:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a5aec353619c16df06df9278580aefb27dcb4e1804bbc86a18da688bbaf4886d-merged.mount: Deactivated successfully.
Jan 22 09:56:10 np0005592157 podman[318871]: 2026-01-22 14:56:10.246768896 +0000 UTC m=+0.099933755 container remove 2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 22 09:56:10 np0005592157 systemd[1]: libpod-conmon-2424f8597bbe3d7d99bf6a042a0c208388cf462b77e4e9223e6af92f84e71867.scope: Deactivated successfully.
Jan 22 09:56:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:56:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:56:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 366c2cc1-4147-4ead-a097-0de777d88098 does not exist
Jan 22 09:56:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f5851f4a-e432-472c-94a2-64c263ca65f8 does not exist
Jan 22 09:56:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8c49df83-52bb-4beb-8ca5-a297cb6b4a4d does not exist
Jan 22 09:56:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:11 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:11.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:12 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:13 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:56:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:13.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:56:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:14 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:14 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:15 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:15.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:16 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:56:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:56:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:56:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:56:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:56:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:56:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:17 np0005592157 podman[318993]: 2026-01-22 14:56:17.406456041 +0000 UTC m=+0.136359459 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:56:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:17.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:17 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:56:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:56:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:56:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/680053100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:56:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:56:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/680053100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:56:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:18 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:21.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:23.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:23 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:23 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:23 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:56:24.140 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:56:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:56:24.141 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:56:24 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:26 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:27.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:27 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:28 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:28 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:29 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:29 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:30.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:31 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:32 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:56:32.144 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.394342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792394448, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1836, "num_deletes": 251, "total_data_size": 2662891, "memory_usage": 2710912, "flush_reason": "Manual Compaction"}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792512152, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 2598388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73110, "largest_seqno": 74945, "table_properties": {"data_size": 2590610, "index_size": 4335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19959, "raw_average_key_size": 21, "raw_value_size": 2573537, "raw_average_value_size": 2749, "num_data_blocks": 186, "num_entries": 936, "num_filter_entries": 936, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093653, "oldest_key_time": 1769093653, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 118065 microseconds, and 8235 cpu microseconds.
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.512407) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 2598388 bytes OK
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.512454) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.516028) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.516057) EVENT_LOG_v1 {"time_micros": 1769093792516048, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.516091) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2654936, prev total WAL file size 2654936, number of live WAL files 2.
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.517975) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(2537KB)], [167(10MB)]
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792518098, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 13241266, "oldest_snapshot_seqno": -1}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 12756 keys, 11618232 bytes, temperature: kUnknown
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792727857, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 11618232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11548153, "index_size": 37070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 350781, "raw_average_key_size": 27, "raw_value_size": 11330390, "raw_average_value_size": 888, "num_data_blocks": 1348, "num_entries": 12756, "num_filter_entries": 12756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.728272) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11618232 bytes
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.741687) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.1 rd, 55.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.1 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(9.6) write-amplify(4.5) OK, records in: 13273, records dropped: 517 output_compression: NoCompression
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.741734) EVENT_LOG_v1 {"time_micros": 1769093792741714, "job": 104, "event": "compaction_finished", "compaction_time_micros": 209908, "compaction_time_cpu_micros": 34767, "output_level": 6, "num_output_files": 1, "total_output_size": 11618232, "num_input_records": 13273, "num_output_records": 12756, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792743051, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792747175, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.517589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.747521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.747698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.747702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.747705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:56:32.747709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:33 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:34 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:34 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:35.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:35 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:36 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:56:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:56:38 np0005592157 podman[319082]: 2026-01-22 14:56:38.370878674 +0000 UTC m=+0.088094780 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:56:38 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 83 slow ops, oldest one blocked for 4788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:39 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:39 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:39 np0005592157 ceph-mon[74359]: Health check update: 83 slow ops, oldest one blocked for 4788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:40 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:41 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:41.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:42 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:43 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:43.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 4793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:44.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:45 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:45 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 4793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:45.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:46.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:46 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:46 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:56:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:56:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:56:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:56:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:56:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:56:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:56:47
Jan 22 09:56:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:56:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:56:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Jan 22 09:56:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:56:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:56:47.633 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:56:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:56:47.634 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:56:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:56:47.634 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:56:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:47 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:48 np0005592157 podman[319106]: 2026-01-22 14:56:48.393254848 +0000 UTC m=+0.119881851 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:56:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 4798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:49 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:50 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:50 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 4798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:50 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:51.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:51 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:52.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:53 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:53 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:56:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:53.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:56:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 47 slow ops, oldest one blocked for 4803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:56:54 np0005592157 ceph-mon[74359]: Health check update: 47 slow ops, oldest one blocked for 4803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:55 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:55 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:55.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:56:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:56:56 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:56:57 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:57.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:58 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 09:56:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 47 slow ops, oldest one blocked for 4808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:56:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:56:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:59.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:00.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:00 np0005592157 radosgw[91596]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 22 09:57:00 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:00 np0005592157 ceph-mon[74359]: Health check update: 47 slow ops, oldest one blocked for 4808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 09:57:01 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:01 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:01.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:57:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:02.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:57:02 np0005592157 ceph-mon[74359]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 09:57:03 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 47 slow ops, oldest one blocked for 4813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:04.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:04 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:04 np0005592157 ceph-mon[74359]: Health check update: 47 slow ops, oldest one blocked for 4813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008133476991438674 of space, bias 1.0, pg target 0.24075091894658476 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:57:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Jan 22 09:57:05 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:05 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:05.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:06.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 09:57:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:07.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:07 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:08.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:08 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 09:57:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:09 np0005592157 podman[319192]: 2026-01-22 14:57:09.359564393 +0000 UTC m=+0.076141493 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:57:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:57:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:09.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:57:09 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 09:57:10 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:57:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:57:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 41283fdf-faaf-4990-83bb-a64f80e9b50c does not exist
Jan 22 09:57:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f8b4a1d5-c027-47b8-81b0-2ee47a9436d2 does not exist
Jan 22 09:57:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d1d69057-9acd-48ad-8aa5-f0973227fb41 does not exist
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:57:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:57:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Jan 22 09:57:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:57:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:57:13 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:13 np0005592157 podman[319603]: 2026-01-22 14:57:13.356507268 +0000 UTC m=+0.054236109 container create 5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 09:57:13 np0005592157 systemd[1]: Started libpod-conmon-5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1.scope.
Jan 22 09:57:13 np0005592157 podman[319603]: 2026-01-22 14:57:13.332614574 +0000 UTC m=+0.030343445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:57:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:57:13 np0005592157 podman[319603]: 2026-01-22 14:57:13.450961105 +0000 UTC m=+0.148689966 container init 5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khorana, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:57:13 np0005592157 podman[319603]: 2026-01-22 14:57:13.464998014 +0000 UTC m=+0.162726855 container start 5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khorana, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:57:13 np0005592157 podman[319603]: 2026-01-22 14:57:13.469102646 +0000 UTC m=+0.166831487 container attach 5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khorana, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:57:13 np0005592157 focused_khorana[319619]: 167 167
Jan 22 09:57:13 np0005592157 podman[319603]: 2026-01-22 14:57:13.475258299 +0000 UTC m=+0.172987140 container died 5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 09:57:13 np0005592157 systemd[1]: libpod-5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1.scope: Deactivated successfully.
Jan 22 09:57:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d56bfba93b402ef000377814653f125f198a9bfca05eed9428fd4729741acfdb-merged.mount: Deactivated successfully.
Jan 22 09:57:13 np0005592157 podman[319603]: 2026-01-22 14:57:13.533097226 +0000 UTC m=+0.230826077 container remove 5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khorana, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 09:57:13 np0005592157 systemd[1]: libpod-conmon-5f89ef3daa489101765eddb37e2b8433cfebda5206bd857ec68db5eb491e19a1.scope: Deactivated successfully.
Jan 22 09:57:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:13 np0005592157 podman[319645]: 2026-01-22 14:57:13.768531506 +0000 UTC m=+0.047577103 container create deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:57:13 np0005592157 systemd[1]: Started libpod-conmon-deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a.scope.
Jan 22 09:57:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:57:13 np0005592157 podman[319645]: 2026-01-22 14:57:13.751202346 +0000 UTC m=+0.030247963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:57:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd263dedfea15f4329d0ce2118a22d9f65b68b8a8ee5c715c997cb5e925a204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd263dedfea15f4329d0ce2118a22d9f65b68b8a8ee5c715c997cb5e925a204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd263dedfea15f4329d0ce2118a22d9f65b68b8a8ee5c715c997cb5e925a204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd263dedfea15f4329d0ce2118a22d9f65b68b8a8ee5c715c997cb5e925a204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd263dedfea15f4329d0ce2118a22d9f65b68b8a8ee5c715c997cb5e925a204/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:13 np0005592157 podman[319645]: 2026-01-22 14:57:13.87770964 +0000 UTC m=+0.156755257 container init deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:57:13 np0005592157 podman[319645]: 2026-01-22 14:57:13.8869703 +0000 UTC m=+0.166015907 container start deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:57:13 np0005592157 podman[319645]: 2026-01-22 14:57:13.89021522 +0000 UTC m=+0.169260827 container attach deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 09:57:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:14.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:14 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:14 np0005592157 upbeat_hypatia[319661]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:57:14 np0005592157 upbeat_hypatia[319661]: --> relative data size: 1.0
Jan 22 09:57:14 np0005592157 upbeat_hypatia[319661]: --> All data devices are unavailable
Jan 22 09:57:14 np0005592157 systemd[1]: libpod-deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a.scope: Deactivated successfully.
Jan 22 09:57:14 np0005592157 podman[319645]: 2026-01-22 14:57:14.702736992 +0000 UTC m=+0.981782629 container died deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:57:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9dd263dedfea15f4329d0ce2118a22d9f65b68b8a8ee5c715c997cb5e925a204-merged.mount: Deactivated successfully.
Jan 22 09:57:14 np0005592157 podman[319645]: 2026-01-22 14:57:14.786531304 +0000 UTC m=+1.065576901 container remove deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:57:14 np0005592157 systemd[1]: libpod-conmon-deee2cccbd131217c4b64fa4e06c7ca9614f6587e55debd2393fa56a7689f61a.scope: Deactivated successfully.
Jan 22 09:57:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Jan 22 09:57:15 np0005592157 podman[319880]: 2026-01-22 14:57:15.668010849 +0000 UTC m=+0.063413667 container create e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:57:15 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:15 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:15 np0005592157 systemd[1]: Started libpod-conmon-e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4.scope.
Jan 22 09:57:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:15 np0005592157 podman[319880]: 2026-01-22 14:57:15.650083603 +0000 UTC m=+0.045486431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:57:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:57:15 np0005592157 podman[319880]: 2026-01-22 14:57:15.774500875 +0000 UTC m=+0.169903713 container init e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:57:15 np0005592157 podman[319880]: 2026-01-22 14:57:15.783502018 +0000 UTC m=+0.178904846 container start e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 09:57:15 np0005592157 podman[319880]: 2026-01-22 14:57:15.787793355 +0000 UTC m=+0.183196233 container attach e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:57:15 np0005592157 zen_dubinsky[319897]: 167 167
Jan 22 09:57:15 np0005592157 systemd[1]: libpod-e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4.scope: Deactivated successfully.
Jan 22 09:57:15 np0005592157 conmon[319897]: conmon e44e25cb844a900eb9af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4.scope/container/memory.events
Jan 22 09:57:15 np0005592157 podman[319880]: 2026-01-22 14:57:15.79240082 +0000 UTC m=+0.187803658 container died e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 09:57:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-013f2983814073a79dc9e9d6d2cb247751256357903245ee0b1d753a8f9dd729-merged.mount: Deactivated successfully.
Jan 22 09:57:15 np0005592157 podman[319880]: 2026-01-22 14:57:15.848323769 +0000 UTC m=+0.243726617 container remove e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:57:15 np0005592157 systemd[1]: libpod-conmon-e44e25cb844a900eb9af9e003e1edde31ca9261eaeb1f17038ab736f6347a0b4.scope: Deactivated successfully.
Jan 22 09:57:16 np0005592157 podman[319923]: 2026-01-22 14:57:16.082549489 +0000 UTC m=+0.062706108 container create cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:57:16 np0005592157 systemd[1]: Started libpod-conmon-cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d.scope.
Jan 22 09:57:16 np0005592157 podman[319923]: 2026-01-22 14:57:16.062524771 +0000 UTC m=+0.042681390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:57:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:57:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb875b6684246b0482967d329903475424e18cb7886ba9ba28d43cecc5fd6d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb875b6684246b0482967d329903475424e18cb7886ba9ba28d43cecc5fd6d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb875b6684246b0482967d329903475424e18cb7886ba9ba28d43cecc5fd6d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:16 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb875b6684246b0482967d329903475424e18cb7886ba9ba28d43cecc5fd6d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:16.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:16 np0005592157 podman[319923]: 2026-01-22 14:57:16.207828512 +0000 UTC m=+0.187985141 container init cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 09:57:16 np0005592157 podman[319923]: 2026-01-22 14:57:16.221914822 +0000 UTC m=+0.202071431 container start cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 09:57:16 np0005592157 podman[319923]: 2026-01-22 14:57:16.226695291 +0000 UTC m=+0.206851900 container attach cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:57:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:57:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:57:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:57:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:57:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:57:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:57:16 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]: {
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:    "0": [
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:        {
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "devices": [
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "/dev/loop3"
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            ],
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "lv_name": "ceph_lv0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "lv_size": "7511998464",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "name": "ceph_lv0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "tags": {
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.cluster_name": "ceph",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.crush_device_class": "",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.encrypted": "0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.osd_id": "0",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.type": "block",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:                "ceph.vdo": "0"
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            },
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "type": "block",
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:            "vg_name": "ceph_vg0"
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:        }
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]:    ]
Jan 22 09:57:17 np0005592157 stupefied_wright[319939]: }
Jan 22 09:57:17 np0005592157 systemd[1]: libpod-cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d.scope: Deactivated successfully.
Jan 22 09:57:17 np0005592157 podman[319923]: 2026-01-22 14:57:17.046314238 +0000 UTC m=+1.026470857 container died cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:57:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dcb875b6684246b0482967d329903475424e18cb7886ba9ba28d43cecc5fd6d9-merged.mount: Deactivated successfully.
Jan 22 09:57:17 np0005592157 podman[319923]: 2026-01-22 14:57:17.113505758 +0000 UTC m=+1.093662357 container remove cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:57:17 np0005592157 systemd[1]: libpod-conmon-cfad6a3d3f10c032b5ccf4c44070b541c0a77d1f70fc813df3c517e798f9068d.scope: Deactivated successfully.
Jan 22 09:57:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:17.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:17 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:17 np0005592157 podman[320098]: 2026-01-22 14:57:17.996714066 +0000 UTC m=+0.068808791 container create f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 09:57:18 np0005592157 systemd[1]: Started libpod-conmon-f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70.scope.
Jan 22 09:57:18 np0005592157 podman[320098]: 2026-01-22 14:57:17.966013673 +0000 UTC m=+0.038108448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:57:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:57:18 np0005592157 podman[320098]: 2026-01-22 14:57:18.091516732 +0000 UTC m=+0.163611517 container init f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:57:18 np0005592157 podman[320098]: 2026-01-22 14:57:18.098418303 +0000 UTC m=+0.170513008 container start f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:57:18 np0005592157 podman[320098]: 2026-01-22 14:57:18.102086364 +0000 UTC m=+0.174181089 container attach f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:57:18 np0005592157 zen_taussig[320114]: 167 167
Jan 22 09:57:18 np0005592157 systemd[1]: libpod-f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70.scope: Deactivated successfully.
Jan 22 09:57:18 np0005592157 podman[320098]: 2026-01-22 14:57:18.109125419 +0000 UTC m=+0.181220144 container died f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 22 09:57:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5c89360092c64858c79870f20808acc1665084127783da8873d003910562593a-merged.mount: Deactivated successfully.
Jan 22 09:57:18 np0005592157 podman[320098]: 2026-01-22 14:57:18.163662654 +0000 UTC m=+0.235757379 container remove f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 09:57:18 np0005592157 systemd[1]: libpod-conmon-f3505c01c9e94ff3d3f34bbb1622d1bdeb33bd688026a532962158a0427b6e70.scope: Deactivated successfully.
Jan 22 09:57:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:18.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:18 np0005592157 podman[320139]: 2026-01-22 14:57:18.453102487 +0000 UTC m=+0.093431593 container create 01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:57:18 np0005592157 systemd[1]: Started libpod-conmon-01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc.scope.
Jan 22 09:57:18 np0005592157 podman[320139]: 2026-01-22 14:57:18.417075222 +0000 UTC m=+0.057404358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:57:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:57:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a3a52eb08cf3bad28d7165869e6fee62eb08ff12c6b164b46235d1e6ba9def/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a3a52eb08cf3bad28d7165869e6fee62eb08ff12c6b164b46235d1e6ba9def/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a3a52eb08cf3bad28d7165869e6fee62eb08ff12c6b164b46235d1e6ba9def/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a3a52eb08cf3bad28d7165869e6fee62eb08ff12c6b164b46235d1e6ba9def/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:57:18 np0005592157 podman[320139]: 2026-01-22 14:57:18.563767397 +0000 UTC m=+0.204096543 container init 01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:57:18 np0005592157 podman[320139]: 2026-01-22 14:57:18.57998667 +0000 UTC m=+0.220315776 container start 01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:57:18 np0005592157 podman[320139]: 2026-01-22 14:57:18.58562248 +0000 UTC m=+0.225951626 container attach 01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:57:18 np0005592157 podman[320153]: 2026-01-22 14:57:18.668814677 +0000 UTC m=+0.164387146 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:57:18 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]: {
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]:        "osd_id": 0,
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]:        "type": "bluestore"
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]:    }
Jan 22 09:57:19 np0005592157 vigorous_vaughan[320156]: }
Jan 22 09:57:19 np0005592157 systemd[1]: libpod-01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc.scope: Deactivated successfully.
Jan 22 09:57:19 np0005592157 podman[320139]: 2026-01-22 14:57:19.507357695 +0000 UTC m=+1.147686761 container died 01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:57:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c9a3a52eb08cf3bad28d7165869e6fee62eb08ff12c6b164b46235d1e6ba9def-merged.mount: Deactivated successfully.
Jan 22 09:57:19 np0005592157 podman[320139]: 2026-01-22 14:57:19.567694955 +0000 UTC m=+1.208024061 container remove 01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_vaughan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:57:19 np0005592157 systemd[1]: libpod-conmon-01dba426701bff9dd0c389acbd392bc764b3e9cfcf751bcaaa743870520909fc.scope: Deactivated successfully.
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2ee88235-7d97-45a3-9486-4e8a7c2986db does not exist
Jan 22 09:57:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c300abdf-55cc-42f2-8b3a-9acfc20a640c does not exist
Jan 22 09:57:19 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 78047195-2cec-44da-95f9-e74336fb2f1b does not exist
Jan 22 09:57:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:19 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:20.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:20 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:21 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:23 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:24 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:24 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:25 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:57:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:57:26 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:26.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:26 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:57:26.363 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:57:26 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:57:26.366 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:57:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:27 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:28 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:28.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:29 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:57:29.368 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:57:29 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:29 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:29.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:30.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:30 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:31 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:31.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:32 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:33 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:33.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:34.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:34 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:34 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 4843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:35 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:57:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:35.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:57:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:36.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:36 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:37 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:38 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:39.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:39 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:39 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:40 np0005592157 podman[320327]: 2026-01-22 14:57:40.350995113 +0000 UTC m=+0.074163013 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:57:40 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:41.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:41 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:42.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:42 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:43 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:57:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:57:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:45 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:45 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:45.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:46 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:46 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:46.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:57:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:57:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:57:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:57:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:57:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:57:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:47 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:57:47
Jan 22 09:57:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:57:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:57:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'backups']
Jan 22 09:57:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:57:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:57:47.634 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:57:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:57:47.635 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:57:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:57:47.635 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:57:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:47.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:48 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:49 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:49 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:49 np0005592157 podman[320350]: 2026-01-22 14:57:49.413256216 +0000 UTC m=+0.146514722 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 09:57:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:50 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:51 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:52 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:53 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:54 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:54 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:55.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:55 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:57:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 22 09:57:56 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 22 09:57:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 22 09:57:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:57.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:57 np0005592157 ceph-mon[74359]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:57:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:58.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:57:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 53 slow ops, oldest one blocked for 4868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.019603) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879019756, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1251, "num_deletes": 250, "total_data_size": 1686491, "memory_usage": 1720968, "flush_reason": "Manual Compaction"}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879035134, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1060655, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74946, "largest_seqno": 76196, "table_properties": {"data_size": 1056089, "index_size": 1833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14035, "raw_average_key_size": 21, "raw_value_size": 1045330, "raw_average_value_size": 1615, "num_data_blocks": 80, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093793, "oldest_key_time": 1769093793, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 15586 microseconds, and 7494 cpu microseconds.
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.035214) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1060655 bytes OK
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.035242) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.037562) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.037591) EVENT_LOG_v1 {"time_micros": 1769093879037582, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.037646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1680772, prev total WAL file size 1680772, number of live WAL files 2.
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.038710) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1035KB)], [170(11MB)]
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879038764, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12678887, "oldest_snapshot_seqno": -1}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 12924 keys, 9372490 bytes, temperature: kUnknown
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879128234, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 9372490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9305247, "index_size": 33850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32325, "raw_key_size": 355040, "raw_average_key_size": 27, "raw_value_size": 9088409, "raw_average_value_size": 703, "num_data_blocks": 1214, "num_entries": 12924, "num_filter_entries": 12924, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.128764) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 9372490 bytes
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.130058) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.5 rd, 104.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(20.8) write-amplify(8.8) OK, records in: 13403, records dropped: 479 output_compression: NoCompression
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.130091) EVENT_LOG_v1 {"time_micros": 1769093879130075, "job": 106, "event": "compaction_finished", "compaction_time_micros": 89626, "compaction_time_cpu_micros": 25437, "output_level": 6, "num_output_files": 1, "total_output_size": 9372490, "num_input_records": 13403, "num_output_records": 12924, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879130696, "job": 106, "event": "table_file_deletion", "file_number": 172}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879135329, "job": 106, "event": "table_file_deletion", "file_number": 170}
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.038621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.135430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.135439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.135441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.135443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:57:59.135444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:57:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:00 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:00 np0005592157 ceph-mon[74359]: Health check update: 53 slow ops, oldest one blocked for 4868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:00 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 09:58:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 22 09:58:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 22 09:58:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 22 09:58:01 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:02 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Jan 22 09:58:03 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:03.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:04 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:04 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:58:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:04.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008133476991438674 of space, bias 1.0, pg target 0.24075091894658476 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001903869753861902 of space, bias 1.0, pg target 0.563545447143123 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:58:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 3.1 KiB/s wr, 23 op/s
Jan 22 09:58:05 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 22 09:58:06 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 22 09:58:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 22 09:58:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 KiB/s wr, 20 op/s
Jan 22 09:58:07 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:07.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:08 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.4 KiB/s wr, 20 op/s
Jan 22 09:58:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:09 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:09 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:09.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:10 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Jan 22 09:58:11 np0005592157 podman[320438]: 2026-01-22 14:58:11.364003764 +0000 UTC m=+0.088401567 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:58:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:11.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:11 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:12 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Jan 22 09:58:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:13.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:13 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:14.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:14 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:14 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 26 op/s
Jan 22 09:58:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:15.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:15 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:58:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:58:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:58:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:58:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:58:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:58:16 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Jan 22 09:58:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:17 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:58:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2214599222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:58:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:58:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2214599222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:58:18 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Jan 22 09:58:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:19.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:19 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:19 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:20 np0005592157 podman[320513]: 2026-01-22 14:58:20.364878353 +0000 UTC m=+0.109449110 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:58:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 09:58:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 09:58:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:20 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 543d7a04-6ea4-41c9-bf3e-943a78c9f347 does not exist
Jan 22 09:58:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2bd0ebf8-d148-4c07-be4f-b5655db61343 does not exist
Jan 22 09:58:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5d88e09c-ff49-42bb-8145-47620b7b70ed does not exist
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:58:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:58:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:22.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:22 np0005592157 podman[320814]: 2026-01-22 14:58:22.518298245 +0000 UTC m=+0.063677834 container create 757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:58:22 np0005592157 systemd[1]: Started libpod-conmon-757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596.scope.
Jan 22 09:58:22 np0005592157 podman[320814]: 2026-01-22 14:58:22.483701845 +0000 UTC m=+0.029081494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:58:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:58:22 np0005592157 podman[320814]: 2026-01-22 14:58:22.627044907 +0000 UTC m=+0.172424466 container init 757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:58:22 np0005592157 podman[320814]: 2026-01-22 14:58:22.641208139 +0000 UTC m=+0.186587688 container start 757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:58:22 np0005592157 podman[320814]: 2026-01-22 14:58:22.645474335 +0000 UTC m=+0.190853884 container attach 757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 09:58:22 np0005592157 quizzical_swanson[320831]: 167 167
Jan 22 09:58:22 np0005592157 systemd[1]: libpod-757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596.scope: Deactivated successfully.
Jan 22 09:58:22 np0005592157 podman[320814]: 2026-01-22 14:58:22.653095214 +0000 UTC m=+0.198474773 container died 757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:58:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dfe37b6417152ef71e5d46254ceaa228d185a42b2582a5e22ecd62c8ebfd394b-merged.mount: Deactivated successfully.
Jan 22 09:58:22 np0005592157 podman[320814]: 2026-01-22 14:58:22.707847565 +0000 UTC m=+0.253227124 container remove 757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:58:22 np0005592157 systemd[1]: libpod-conmon-757e3a257067dee691da08c6a33bbe1224fc5441d3ceb3f7c3585208c72f2596.scope: Deactivated successfully.
Jan 22 09:58:22 np0005592157 podman[320853]: 2026-01-22 14:58:22.944575408 +0000 UTC m=+0.050744442 container create 5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kepler, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:58:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:22 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:22 np0005592157 systemd[1]: Started libpod-conmon-5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6.scope.
Jan 22 09:58:23 np0005592157 podman[320853]: 2026-01-22 14:58:22.922589121 +0000 UTC m=+0.028758175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:58:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:58:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50df974d9f0e2d941732e9ff24606adb63db9ea24cf24e70fb52fe485232fc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50df974d9f0e2d941732e9ff24606adb63db9ea24cf24e70fb52fe485232fc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50df974d9f0e2d941732e9ff24606adb63db9ea24cf24e70fb52fe485232fc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50df974d9f0e2d941732e9ff24606adb63db9ea24cf24e70fb52fe485232fc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50df974d9f0e2d941732e9ff24606adb63db9ea24cf24e70fb52fe485232fc9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:23 np0005592157 podman[320853]: 2026-01-22 14:58:23.047349992 +0000 UTC m=+0.153519116 container init 5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:58:23 np0005592157 podman[320853]: 2026-01-22 14:58:23.057029282 +0000 UTC m=+0.163198336 container start 5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:58:23 np0005592157 podman[320853]: 2026-01-22 14:58:23.062127759 +0000 UTC m=+0.168296823 container attach 5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kepler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:58:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:23 np0005592157 charming_kepler[320871]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:58:23 np0005592157 charming_kepler[320871]: --> relative data size: 1.0
Jan 22 09:58:23 np0005592157 charming_kepler[320871]: --> All data devices are unavailable
Jan 22 09:58:23 np0005592157 systemd[1]: libpod-5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6.scope: Deactivated successfully.
Jan 22 09:58:23 np0005592157 podman[320853]: 2026-01-22 14:58:23.902914972 +0000 UTC m=+1.009084006 container died 5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 09:58:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b50df974d9f0e2d941732e9ff24606adb63db9ea24cf24e70fb52fe485232fc9-merged.mount: Deactivated successfully.
Jan 22 09:58:23 np0005592157 podman[320853]: 2026-01-22 14:58:23.961375355 +0000 UTC m=+1.067544379 container remove 5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:58:23 np0005592157 systemd[1]: libpod-conmon-5f6c42d9c0282cfe4070ab3710244a6ea7dca31810321b6b9a774f185333c3b6.scope: Deactivated successfully.
Jan 22 09:58:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4893 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:24 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:24.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:24 np0005592157 podman[321041]: 2026-01-22 14:58:24.83482371 +0000 UTC m=+0.075052866 container create 8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 09:58:24 np0005592157 systemd[1]: Started libpod-conmon-8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18.scope.
Jan 22 09:58:24 np0005592157 podman[321041]: 2026-01-22 14:58:24.80505316 +0000 UTC m=+0.045282366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:58:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:58:24 np0005592157 podman[321041]: 2026-01-22 14:58:24.935873651 +0000 UTC m=+0.176102857 container init 8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:58:24 np0005592157 podman[321041]: 2026-01-22 14:58:24.945027858 +0000 UTC m=+0.185256974 container start 8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 09:58:24 np0005592157 podman[321041]: 2026-01-22 14:58:24.949189882 +0000 UTC m=+0.189419098 container attach 8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:58:24 np0005592157 wizardly_greider[321057]: 167 167
Jan 22 09:58:24 np0005592157 systemd[1]: libpod-8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18.scope: Deactivated successfully.
Jan 22 09:58:24 np0005592157 podman[321041]: 2026-01-22 14:58:24.953462148 +0000 UTC m=+0.193691304 container died 8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:58:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-828d958aff89771dcf3470666528c65e84dd10b3300083cd8170fd8639cbe998-merged.mount: Deactivated successfully.
Jan 22 09:58:25 np0005592157 podman[321041]: 2026-01-22 14:58:25.003002399 +0000 UTC m=+0.243231555 container remove 8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:58:25 np0005592157 systemd[1]: libpod-conmon-8812032192eea7b62a05c68a09598a2dd84ff5bea740109f73667b69cd80cd18.scope: Deactivated successfully.
Jan 22 09:58:25 np0005592157 podman[321083]: 2026-01-22 14:58:25.215088919 +0000 UTC m=+0.066545505 container create 65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:58:25 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:25 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4893 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:25 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:25 np0005592157 systemd[1]: Started libpod-conmon-65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7.scope.
Jan 22 09:58:25 np0005592157 podman[321083]: 2026-01-22 14:58:25.187435401 +0000 UTC m=+0.038892037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:58:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:58:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe8e74bfe830fffb333895d1a06795adde8fb524c2c85fcceb7f1459780661b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe8e74bfe830fffb333895d1a06795adde8fb524c2c85fcceb7f1459780661b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe8e74bfe830fffb333895d1a06795adde8fb524c2c85fcceb7f1459780661b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe8e74bfe830fffb333895d1a06795adde8fb524c2c85fcceb7f1459780661b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:25 np0005592157 podman[321083]: 2026-01-22 14:58:25.327289467 +0000 UTC m=+0.178746093 container init 65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 09:58:25 np0005592157 podman[321083]: 2026-01-22 14:58:25.338099716 +0000 UTC m=+0.189556302 container start 65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:58:25 np0005592157 podman[321083]: 2026-01-22 14:58:25.34226718 +0000 UTC m=+0.193723766 container attach 65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 09:58:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]: {
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:    "0": [
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:        {
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "devices": [
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "/dev/loop3"
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            ],
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "lv_name": "ceph_lv0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "lv_size": "7511998464",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "name": "ceph_lv0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "tags": {
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.cluster_name": "ceph",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.crush_device_class": "",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.encrypted": "0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.osd_id": "0",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.type": "block",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:                "ceph.vdo": "0"
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            },
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "type": "block",
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:            "vg_name": "ceph_vg0"
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:        }
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]:    ]
Jan 22 09:58:26 np0005592157 musing_chandrasekhar[321100]: }
Jan 22 09:58:26 np0005592157 systemd[1]: libpod-65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7.scope: Deactivated successfully.
Jan 22 09:58:26 np0005592157 podman[321083]: 2026-01-22 14:58:26.104175993 +0000 UTC m=+0.955632579 container died 65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:58:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:26.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:26 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5fe8e74bfe830fffb333895d1a06795adde8fb524c2c85fcceb7f1459780661b-merged.mount: Deactivated successfully.
Jan 22 09:58:26 np0005592157 podman[321083]: 2026-01-22 14:58:26.579768821 +0000 UTC m=+1.431225377 container remove 65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:58:26 np0005592157 systemd[1]: libpod-conmon-65afe9232e88059b61c938f52487b041969c033161532f864fd630bc728421f7.scope: Deactivated successfully.
Jan 22 09:58:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:27 np0005592157 podman[321265]: 2026-01-22 14:58:27.415347376 +0000 UTC m=+0.061611582 container create c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 09:58:27 np0005592157 systemd[1]: Started libpod-conmon-c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18.scope.
Jan 22 09:58:27 np0005592157 podman[321265]: 2026-01-22 14:58:27.386119269 +0000 UTC m=+0.032383555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:58:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:58:27 np0005592157 podman[321265]: 2026-01-22 14:58:27.520861408 +0000 UTC m=+0.167125664 container init c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:58:27 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:27 np0005592157 podman[321265]: 2026-01-22 14:58:27.534354263 +0000 UTC m=+0.180618509 container start c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:58:27 np0005592157 podman[321265]: 2026-01-22 14:58:27.539198794 +0000 UTC m=+0.185463050 container attach c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 09:58:27 np0005592157 focused_perlman[321281]: 167 167
Jan 22 09:58:27 np0005592157 systemd[1]: libpod-c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18.scope: Deactivated successfully.
Jan 22 09:58:27 np0005592157 podman[321265]: 2026-01-22 14:58:27.545563532 +0000 UTC m=+0.191827768 container died c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_perlman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:58:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bbaafbd1c00c9ea6ce5dd551ab65df187691189ad9c9e84ccfe45b3502e4be6b-merged.mount: Deactivated successfully.
Jan 22 09:58:27 np0005592157 podman[321265]: 2026-01-22 14:58:27.595588915 +0000 UTC m=+0.241853131 container remove c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_perlman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 09:58:27 np0005592157 systemd[1]: libpod-conmon-c79d1c62b9c409b374d609c8e0d729624ece6b237213a8243b4a56c3e044eb18.scope: Deactivated successfully.
Jan 22 09:58:27 np0005592157 podman[321307]: 2026-01-22 14:58:27.791860281 +0000 UTC m=+0.054389822 container create f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:58:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:27 np0005592157 systemd[1]: Started libpod-conmon-f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6.scope.
Jan 22 09:58:27 np0005592157 podman[321307]: 2026-01-22 14:58:27.761289662 +0000 UTC m=+0.023819243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:58:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:58:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9450e7e16cf745a0dc9eab4fc70e6bfd983279b6e5291819c14bcb6ec271d468/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9450e7e16cf745a0dc9eab4fc70e6bfd983279b6e5291819c14bcb6ec271d468/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9450e7e16cf745a0dc9eab4fc70e6bfd983279b6e5291819c14bcb6ec271d468/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9450e7e16cf745a0dc9eab4fc70e6bfd983279b6e5291819c14bcb6ec271d468/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:58:27 np0005592157 podman[321307]: 2026-01-22 14:58:27.887218601 +0000 UTC m=+0.149748142 container init f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:58:27 np0005592157 podman[321307]: 2026-01-22 14:58:27.898497181 +0000 UTC m=+0.161026682 container start f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:58:27 np0005592157 podman[321307]: 2026-01-22 14:58:27.902986173 +0000 UTC m=+0.165515774 container attach f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:58:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:28.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:28 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]: {
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]:        "osd_id": 0,
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]:        "type": "bluestore"
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]:    }
Jan 22 09:58:28 np0005592157 hopeful_lumiere[321324]: }
Jan 22 09:58:28 np0005592157 systemd[1]: libpod-f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6.scope: Deactivated successfully.
Jan 22 09:58:28 np0005592157 podman[321307]: 2026-01-22 14:58:28.776818338 +0000 UTC m=+1.039347859 container died f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 09:58:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9450e7e16cf745a0dc9eab4fc70e6bfd983279b6e5291819c14bcb6ec271d468-merged.mount: Deactivated successfully.
Jan 22 09:58:28 np0005592157 podman[321307]: 2026-01-22 14:58:28.839115476 +0000 UTC m=+1.101644977 container remove f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:58:28 np0005592157 systemd[1]: libpod-conmon-f9bb293a4428e5d1488f7b1bb6dd9f5d7f40aa6c124592bc1d514bce75467ca6.scope: Deactivated successfully.
Jan 22 09:58:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:58:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:58:28 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 32e2fbd0-f2eb-48b1-a8bd-bf0cfeb43719 does not exist
Jan 22 09:58:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a9655870-339c-4701-bf95-b0f2e5e81bb6 does not exist
Jan 22 09:58:28 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f5e68917-8d64-41bd-abba-3a02a6bd051d does not exist
Jan 22 09:58:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:29.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:29 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:29 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:29 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:31 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:31 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:31 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:58:31.487 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:58:31 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:58:31.490 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:58:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:31.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:32.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:32 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:33 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 22 09:58:34 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:34 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 22 09:58:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 22 09:58:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 102 B/s rd, 0 op/s
Jan 22 09:58:35 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:58:35.493 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:58:35 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:35.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 22 09:58:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 22 09:58:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 22 09:58:36 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Jan 22 09:58:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 22 09:58:37 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 22 09:58:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 22 09:58:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:38 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 5.0 KiB/s wr, 89 op/s
Jan 22 09:58:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:39 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:39 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:39.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:40.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:40 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 6.6 KiB/s wr, 117 op/s
Jan 22 09:58:41 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:41.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:58:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:42.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:58:42 np0005592157 podman[321464]: 2026-01-22 14:58:42.371701874 +0000 UTC m=+0.091369061 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 09:58:42 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 5.2 KiB/s wr, 93 op/s
Jan 22 09:58:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:43.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:43 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 22 09:58:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 22 09:58:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 22 09:58:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:44.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Jan 22 09:58:44 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:44 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:45.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:58:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:46.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:58:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:58:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:58:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:58:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:58:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:58:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:58:46 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:46 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.0 KiB/s wr, 54 op/s
Jan 22 09:58:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:58:47
Jan 22 09:58:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:58:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:58:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', 'backups', '.mgr', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 22 09:58:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:58:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:58:47.635 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:58:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:58:47.636 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:58:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:58:47.636 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:58:47 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:48.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:48 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Jan 22 09:58:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:49 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:49 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:49.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:50.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:50 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:51 np0005592157 podman[321489]: 2026-01-22 14:58:51.385605548 +0000 UTC m=+0.106639871 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:58:51 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:58:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:58:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:52 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:53 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:53.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.045115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934045167, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 975, "num_deletes": 259, "total_data_size": 1165887, "memory_usage": 1191168, "flush_reason": "Manual Compaction"}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934057595, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1147709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76197, "largest_seqno": 77171, "table_properties": {"data_size": 1143055, "index_size": 2113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11768, "raw_average_key_size": 20, "raw_value_size": 1132988, "raw_average_value_size": 1970, "num_data_blocks": 90, "num_entries": 575, "num_filter_entries": 575, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093880, "oldest_key_time": 1769093880, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 12987 microseconds, and 6560 cpu microseconds.
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058093) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1147709 bytes OK
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058130) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.060544) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.060571) EVENT_LOG_v1 {"time_micros": 1769093934060563, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.060594) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1161119, prev total WAL file size 1161119, number of live WAL files 2.
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.061447) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373731' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1120KB)], [173(9152KB)]
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934061517, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 10520199, "oldest_snapshot_seqno": -1}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 12964 keys, 10366272 bytes, temperature: kUnknown
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934166528, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10366272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10297461, "index_size": 35290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32453, "raw_key_size": 357225, "raw_average_key_size": 27, "raw_value_size": 10078654, "raw_average_value_size": 777, "num_data_blocks": 1270, "num_entries": 12964, "num_filter_entries": 12964, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.167146) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10366272 bytes
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.169161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.0 rd, 98.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.9 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.2) write-amplify(9.0) OK, records in: 13499, records dropped: 535 output_compression: NoCompression
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.169194) EVENT_LOG_v1 {"time_micros": 1769093934169178, "job": 108, "event": "compaction_finished", "compaction_time_micros": 105211, "compaction_time_cpu_micros": 48380, "output_level": 6, "num_output_files": 1, "total_output_size": 10366272, "num_input_records": 13499, "num_output_records": 12964, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934169796, "job": 108, "event": "table_file_deletion", "file_number": 175}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934173229, "job": 108, "event": "table_file_deletion", "file_number": 173}
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.061341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.173426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.173437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.173441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.173444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:58:54.173446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:54.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:54 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:55 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:56 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:58:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:57.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:58:57 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:58 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:58:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:58:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:59.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:59 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:59 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:00.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:00 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:01.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:01 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:02 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:59:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:03.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:59:04 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:04.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008133476991438674 of space, bias 1.0, pg target 0.24075091894658476 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 09:59:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:05 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:05 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:05.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:06 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:06.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:07 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:07.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:08 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:59:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:59:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:09 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:09 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:09.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:10 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:11 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:11.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:12 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:12.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:13 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:13 np0005592157 podman[321576]: 2026-01-22 14:59:13.342797472 +0000 UTC m=+0.074488352 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:59:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:13.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:14 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:14 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:15 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:16 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:59:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:59:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:59:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:59:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:59:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:59:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.312602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957312722, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 522, "num_deletes": 251, "total_data_size": 439957, "memory_usage": 450952, "flush_reason": "Manual Compaction"}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957331987, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 432980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77172, "largest_seqno": 77693, "table_properties": {"data_size": 430246, "index_size": 705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7232, "raw_average_key_size": 19, "raw_value_size": 424544, "raw_average_value_size": 1141, "num_data_blocks": 31, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093934, "oldest_key_time": 1769093934, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 19472 microseconds, and 3557 cpu microseconds.
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.332095) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 432980 bytes OK
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.332126) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.334209) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.334233) EVENT_LOG_v1 {"time_micros": 1769093957334225, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.334258) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 436952, prev total WAL file size 436952, number of live WAL files 2.
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.334979) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(422KB)], [176(10123KB)]
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957335065, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 10799252, "oldest_snapshot_seqno": -1}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 12825 keys, 9181372 bytes, temperature: kUnknown
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957423183, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9181372, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9114441, "index_size": 33794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32069, "raw_key_size": 355179, "raw_average_key_size": 27, "raw_value_size": 8898606, "raw_average_value_size": 693, "num_data_blocks": 1203, "num_entries": 12825, "num_filter_entries": 12825, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.423485) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9181372 bytes
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.424877) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.4 rd, 104.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(46.1) write-amplify(21.2) OK, records in: 13336, records dropped: 511 output_compression: NoCompression
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.424894) EVENT_LOG_v1 {"time_micros": 1769093957424886, "job": 110, "event": "compaction_finished", "compaction_time_micros": 88219, "compaction_time_cpu_micros": 49264, "output_level": 6, "num_output_files": 1, "total_output_size": 9181372, "num_input_records": 13336, "num_output_records": 12825, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957425261, "job": 110, "event": "table_file_deletion", "file_number": 178}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957427209, "job": 110, "event": "table_file_deletion", "file_number": 176}
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.334795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.427349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.427357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.427359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.427360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-14:59:17.427361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:17.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:18 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:18.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:19 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:19 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:19.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:20.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:20 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:21 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:21.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:22 np0005592157 podman[321650]: 2026-01-22 14:59:22.362208751 +0000 UTC m=+0.101651417 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:59:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:22.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:22 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:59:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:23.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:59:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:24 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:25 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:26 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:27 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:30.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 09:59:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:31.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:32 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:59:32.546 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:59:32 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:59:32.549 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:59:32 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 09:59:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 09:59:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 09:59:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:33.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 49b97644-fea5-41f2-a4a6-90f590155b12 does not exist
Jan 22 09:59:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e9074c65-b2d1-4a7a-aabe-fed72563972a does not exist
Jan 22 09:59:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d50da02f-d6b9-4d45-b9ec-32d1f5385aca does not exist
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 09:59:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 09:59:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:34 np0005592157 podman[321951]: 2026-01-22 14:59:34.565709194 +0000 UTC m=+0.045519662 container create 8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:59:34 np0005592157 systemd[1]: Started libpod-conmon-8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85.scope.
Jan 22 09:59:34 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:59:34 np0005592157 podman[321951]: 2026-01-22 14:59:34.548220399 +0000 UTC m=+0.028030887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:59:34 np0005592157 podman[321951]: 2026-01-22 14:59:34.647113187 +0000 UTC m=+0.126923655 container init 8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 09:59:34 np0005592157 podman[321951]: 2026-01-22 14:59:34.655136856 +0000 UTC m=+0.134947324 container start 8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 09:59:34 np0005592157 podman[321951]: 2026-01-22 14:59:34.658715295 +0000 UTC m=+0.138525783 container attach 8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:59:34 np0005592157 clever_nobel[321968]: 167 167
Jan 22 09:59:34 np0005592157 systemd[1]: libpod-8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85.scope: Deactivated successfully.
Jan 22 09:59:34 np0005592157 podman[321951]: 2026-01-22 14:59:34.663034792 +0000 UTC m=+0.142845300 container died 8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:59:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-742a7a0975e5aa4410cfc017776322af7e9e16c35e2998006a2be8ffb9b962c7-merged.mount: Deactivated successfully.
Jan 22 09:59:34 np0005592157 podman[321951]: 2026-01-22 14:59:34.709958468 +0000 UTC m=+0.189768936 container remove 8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:59:34 np0005592157 systemd[1]: libpod-conmon-8417ddb60cd4922a7f3d7ecaa5b18fdfe94cf6c73cb9192bc8c2441e8394bf85.scope: Deactivated successfully.
Jan 22 09:59:34 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:59:34 np0005592157 podman[321992]: 2026-01-22 14:59:34.94504658 +0000 UTC m=+0.069275212 container create d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:59:34 np0005592157 systemd[1]: Started libpod-conmon-d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35.scope.
Jan 22 09:59:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:35 np0005592157 podman[321992]: 2026-01-22 14:59:34.922459089 +0000 UTC m=+0.046687731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:59:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:59:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d50b6f7061d11c78cf952fbc1875bd7bb96e42ef9c924c7d2c925de3b6504b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d50b6f7061d11c78cf952fbc1875bd7bb96e42ef9c924c7d2c925de3b6504b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d50b6f7061d11c78cf952fbc1875bd7bb96e42ef9c924c7d2c925de3b6504b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d50b6f7061d11c78cf952fbc1875bd7bb96e42ef9c924c7d2c925de3b6504b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d50b6f7061d11c78cf952fbc1875bd7bb96e42ef9c924c7d2c925de3b6504b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:35 np0005592157 podman[321992]: 2026-01-22 14:59:35.094298049 +0000 UTC m=+0.218526661 container init d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elbakyan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:59:35 np0005592157 podman[321992]: 2026-01-22 14:59:35.106619535 +0000 UTC m=+0.230848127 container start d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 09:59:35 np0005592157 podman[321992]: 2026-01-22 14:59:35.110740068 +0000 UTC m=+0.234968760 container attach d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:59:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:35.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:35 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:35 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:35 np0005592157 nifty_elbakyan[322008]: --> passed data devices: 0 physical, 1 LVM
Jan 22 09:59:35 np0005592157 nifty_elbakyan[322008]: --> relative data size: 1.0
Jan 22 09:59:35 np0005592157 nifty_elbakyan[322008]: --> All data devices are unavailable
Jan 22 09:59:36 np0005592157 systemd[1]: libpod-d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35.scope: Deactivated successfully.
Jan 22 09:59:36 np0005592157 podman[321992]: 2026-01-22 14:59:36.023385436 +0000 UTC m=+1.147614068 container died d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elbakyan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 09:59:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4d50b6f7061d11c78cf952fbc1875bd7bb96e42ef9c924c7d2c925de3b6504b0-merged.mount: Deactivated successfully.
Jan 22 09:59:36 np0005592157 podman[321992]: 2026-01-22 14:59:36.084250398 +0000 UTC m=+1.208479000 container remove d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elbakyan, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:59:36 np0005592157 systemd[1]: libpod-conmon-d2b4ce7ceb490b6dc989654fadd7cb29eb3a2b66ceff6e587edbc5811c81bc35.scope: Deactivated successfully.
Jan 22 09:59:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:36.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:36 np0005592157 podman[322226]: 2026-01-22 14:59:36.854076399 +0000 UTC m=+0.046769984 container create 982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:59:36 np0005592157 systemd[1]: Started libpod-conmon-982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13.scope.
Jan 22 09:59:36 np0005592157 podman[322226]: 2026-01-22 14:59:36.829313323 +0000 UTC m=+0.022006998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:59:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:59:36 np0005592157 podman[322226]: 2026-01-22 14:59:36.945295825 +0000 UTC m=+0.137989410 container init 982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 09:59:36 np0005592157 podman[322226]: 2026-01-22 14:59:36.952454633 +0000 UTC m=+0.145148218 container start 982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:59:36 np0005592157 podman[322226]: 2026-01-22 14:59:36.955476308 +0000 UTC m=+0.148169893 container attach 982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 09:59:36 np0005592157 inspiring_newton[322242]: 167 167
Jan 22 09:59:36 np0005592157 systemd[1]: libpod-982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13.scope: Deactivated successfully.
Jan 22 09:59:36 np0005592157 podman[322226]: 2026-01-22 14:59:36.957423897 +0000 UTC m=+0.150117502 container died 982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 09:59:36 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5050b1f48400602cbfe11669dd2737044893f826ce714a9f19ca9149b158c4d5-merged.mount: Deactivated successfully.
Jan 22 09:59:36 np0005592157 podman[322226]: 2026-01-22 14:59:36.9953863 +0000 UTC m=+0.188079895 container remove 982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:59:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:37 np0005592157 systemd[1]: libpod-conmon-982a1a294002cf4bce0732c4630df7c3f805c306b9c6d9a5b5af1d73094b4e13.scope: Deactivated successfully.
Jan 22 09:59:37 np0005592157 podman[322266]: 2026-01-22 14:59:37.184063139 +0000 UTC m=+0.051625704 container create bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_babbage, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 09:59:37 np0005592157 systemd[1]: Started libpod-conmon-bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6.scope.
Jan 22 09:59:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:59:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e226bbeec2f35b47cf2da4e90ddb364375050e412524c91d3cf73657ff4fab71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e226bbeec2f35b47cf2da4e90ddb364375050e412524c91d3cf73657ff4fab71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e226bbeec2f35b47cf2da4e90ddb364375050e412524c91d3cf73657ff4fab71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e226bbeec2f35b47cf2da4e90ddb364375050e412524c91d3cf73657ff4fab71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:37 np0005592157 podman[322266]: 2026-01-22 14:59:37.163308613 +0000 UTC m=+0.030871228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:59:37 np0005592157 podman[322266]: 2026-01-22 14:59:37.270162198 +0000 UTC m=+0.137724773 container init bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:59:37 np0005592157 podman[322266]: 2026-01-22 14:59:37.276012374 +0000 UTC m=+0.143574969 container start bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:59:37 np0005592157 podman[322266]: 2026-01-22 14:59:37.279870319 +0000 UTC m=+0.147432904 container attach bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:59:37 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:59:37.552 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:59:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:37 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]: {
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:    "0": [
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:        {
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "devices": [
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "/dev/loop3"
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            ],
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "lv_name": "ceph_lv0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "lv_size": "7511998464",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "name": "ceph_lv0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "tags": {
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.cephx_lockbox_secret": "",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.cluster_name": "ceph",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.crush_device_class": "",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.encrypted": "0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.osd_id": "0",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.type": "block",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:                "ceph.vdo": "0"
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            },
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "type": "block",
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:            "vg_name": "ceph_vg0"
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:        }
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]:    ]
Jan 22 09:59:37 np0005592157 hungry_babbage[322282]: }
Jan 22 09:59:38 np0005592157 systemd[1]: libpod-bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6.scope: Deactivated successfully.
Jan 22 09:59:38 np0005592157 podman[322266]: 2026-01-22 14:59:38.015292305 +0000 UTC m=+0.882854880 container died bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:59:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e226bbeec2f35b47cf2da4e90ddb364375050e412524c91d3cf73657ff4fab71-merged.mount: Deactivated successfully.
Jan 22 09:59:38 np0005592157 podman[322266]: 2026-01-22 14:59:38.067747338 +0000 UTC m=+0.935309903 container remove bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 09:59:38 np0005592157 systemd[1]: libpod-conmon-bdc6dca95e0ee9a9202d314a343e12d176f0955c561d43b7275a1d0535cac5f6.scope: Deactivated successfully.
Jan 22 09:59:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:38.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:38 np0005592157 podman[322445]: 2026-01-22 14:59:38.657659538 +0000 UTC m=+0.038013856 container create 97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:59:38 np0005592157 systemd[1]: Started libpod-conmon-97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5.scope.
Jan 22 09:59:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:59:38 np0005592157 podman[322445]: 2026-01-22 14:59:38.730919268 +0000 UTC m=+0.111273636 container init 97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 09:59:38 np0005592157 podman[322445]: 2026-01-22 14:59:38.641039765 +0000 UTC m=+0.021394123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:59:38 np0005592157 podman[322445]: 2026-01-22 14:59:38.741487471 +0000 UTC m=+0.121841819 container start 97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 09:59:38 np0005592157 podman[322445]: 2026-01-22 14:59:38.744503486 +0000 UTC m=+0.124857844 container attach 97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 09:59:38 np0005592157 silly_banach[322461]: 167 167
Jan 22 09:59:38 np0005592157 systemd[1]: libpod-97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5.scope: Deactivated successfully.
Jan 22 09:59:38 np0005592157 podman[322445]: 2026-01-22 14:59:38.746420153 +0000 UTC m=+0.126774491 container died 97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:59:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0b07548dcdbb8894d6e10a801187886dc4ed4168d0ed52cc5149fda1d2c852da-merged.mount: Deactivated successfully.
Jan 22 09:59:38 np0005592157 podman[322445]: 2026-01-22 14:59:38.782799717 +0000 UTC m=+0.163154085 container remove 97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:59:38 np0005592157 systemd[1]: libpod-conmon-97f2e86a10e3ebeb5506f7b1d8892c2364fe39df7134efc667271ac56b25ffe5.scope: Deactivated successfully.
Jan 22 09:59:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:39 np0005592157 podman[322484]: 2026-01-22 14:59:39.002882326 +0000 UTC m=+0.062048103 container create e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:59:39 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:39 np0005592157 systemd[1]: Started libpod-conmon-e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e.scope.
Jan 22 09:59:39 np0005592157 podman[322484]: 2026-01-22 14:59:38.976682495 +0000 UTC m=+0.035848302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:59:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 09:59:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e45fa4edad00891e3ccd18561625a395d67e566ac3b6dce02a9f7ca48adc15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e45fa4edad00891e3ccd18561625a395d67e566ac3b6dce02a9f7ca48adc15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e45fa4edad00891e3ccd18561625a395d67e566ac3b6dce02a9f7ca48adc15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e45fa4edad00891e3ccd18561625a395d67e566ac3b6dce02a9f7ca48adc15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:59:39 np0005592157 podman[322484]: 2026-01-22 14:59:39.103272111 +0000 UTC m=+0.162437938 container init e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:59:39 np0005592157 podman[322484]: 2026-01-22 14:59:39.115348331 +0000 UTC m=+0.174514118 container start e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kilby, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 09:59:39 np0005592157 podman[322484]: 2026-01-22 14:59:39.118806597 +0000 UTC m=+0.177972424 container attach e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]: {
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]:        "osd_id": 0,
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]:        "type": "bluestore"
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]:    }
Jan 22 09:59:39 np0005592157 cranky_kilby[322500]: }
Jan 22 09:59:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:39.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:39 np0005592157 systemd[1]: libpod-e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e.scope: Deactivated successfully.
Jan 22 09:59:39 np0005592157 podman[322484]: 2026-01-22 14:59:39.909552756 +0000 UTC m=+0.968718543 container died e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:59:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-21e45fa4edad00891e3ccd18561625a395d67e566ac3b6dce02a9f7ca48adc15-merged.mount: Deactivated successfully.
Jan 22 09:59:39 np0005592157 podman[322484]: 2026-01-22 14:59:39.968909871 +0000 UTC m=+1.028075668 container remove e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kilby, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 09:59:39 np0005592157 systemd[1]: libpod-conmon-e6a36bd91dc24efdeb756d190a2a38b35109dfcda4a8ab57b28a3831678f201e.scope: Deactivated successfully.
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b39c8f08-839f-4978-bb35-3519085aa5df does not exist
Jan 22 09:59:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7ed94550-3e04-44c6-a8db-b86cf1c3664d does not exist
Jan 22 09:59:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e05fff6c-e437-4db0-baaa-a3413cc27355 does not exist
Jan 22 09:59:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:40.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:41 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:41 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:41.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:42 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:59:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:42.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:59:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:43 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:43.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:44 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:44 np0005592157 podman[322584]: 2026-01-22 14:59:44.365991077 +0000 UTC m=+0.089905295 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:59:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:59:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:59:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:45 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:45 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:45.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:46 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:59:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:59:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:59:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:59:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 09:59:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 09:59:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:47 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_14:59:47
Jan 22 09:59:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 09:59:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 09:59:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups']
Jan 22 09:59:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 09:59:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:59:47.636 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:59:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:59:47.636 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:59:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 14:59:47.636 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:59:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:47.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:48 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:49 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:49.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:50 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:50 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:51 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:51.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:52 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:53 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:53 np0005592157 podman[322608]: 2026-01-22 14:59:53.352395176 +0000 UTC m=+0.094988721 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:59:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.007000174s ======
Jan 22 09:59:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:53.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.007000174s
Jan 22 09:59:54 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:54.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:55 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:55 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:55.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:56.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:56 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:57 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:59:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:57.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:59:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:58 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 09:59:59 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 09:59:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:59.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:00.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:01 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:00:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:01.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:00:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:02.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:02 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:03 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:03.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:04.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:04 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008133476991438674 of space, bias 1.0, pg target 0.24075091894658476 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:00:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:00:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:05 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:05 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:00:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:05.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:00:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:06 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:07 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:08 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:09 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 4998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:10.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:10 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:10 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 4998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:11 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:12 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:13 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:14.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:14 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 5003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:15 np0005592157 podman[322695]: 2026-01-22 15:00:15.345377179 +0000 UTC m=+0.076815367 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 22 10:00:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:16 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:16 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 5003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:16.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:00:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:00:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:00:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:00:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:00:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:00:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:17 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:17 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:18 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:18.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:19 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 5008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:20 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:20 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 5008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:21 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:00:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:21.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:00:22 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:22.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:23 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:23.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:24 np0005592157 podman[322769]: 2026-01-22 15:00:24.384540607 +0000 UTC m=+0.123905300 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 22 10:00:24 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:24.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 5013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:25 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:25 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 5013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:00:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:00:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:26 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:00:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1400099288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:00:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:00:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1400099288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:00:27 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:27.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:28.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:28 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 22 10:00:29 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:29.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 5018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:30.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:30 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:30 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 5018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Jan 22 10:00:31 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:31.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:00:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:32.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:00:32 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Jan 22 10:00:33 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:33 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:00:33.905 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:00:33 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:00:33.906 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:00:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:33.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:34 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 5023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Jan 22 10:00:35 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:35 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 5023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:35.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:36 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:36 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:00:36.909 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:00:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 938 B/s wr, 23 op/s
Jan 22 10:00:37 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:37.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:38 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 22 10:00:39 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:39.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 5028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:40 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:40 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 5028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 767 B/s wr, 25 op/s
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9080be3b-6030-4520-8b19-112f8499c55b does not exist
Jan 22 10:00:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev eb50ccf0-b185-41f9-bd4d-d8e5d24198db does not exist
Jan 22 10:00:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 774658da-f633-451c-8006-0cbcc1298612 does not exist
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:00:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:41.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:42 np0005592157 podman[323127]: 2026-01-22 15:00:42.068968498 +0000 UTC m=+0.054091667 container create 5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_spence, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:00:42 np0005592157 systemd[1]: Started libpod-conmon-5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8.scope.
Jan 22 10:00:42 np0005592157 podman[323127]: 2026-01-22 15:00:42.041309255 +0000 UTC m=+0.026432464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:00:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:00:42 np0005592157 podman[323127]: 2026-01-22 15:00:42.162139618 +0000 UTC m=+0.147262777 container init 5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_spence, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:00:42 np0005592157 podman[323127]: 2026-01-22 15:00:42.16952794 +0000 UTC m=+0.154651069 container start 5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_spence, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:00:42 np0005592157 podman[323127]: 2026-01-22 15:00:42.173670903 +0000 UTC m=+0.158794032 container attach 5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_spence, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 10:00:42 np0005592157 jovial_spence[323143]: 167 167
Jan 22 10:00:42 np0005592157 systemd[1]: libpod-5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8.scope: Deactivated successfully.
Jan 22 10:00:42 np0005592157 conmon[323143]: conmon 5f226b89bcdc107e21d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8.scope/container/memory.events
Jan 22 10:00:42 np0005592157 podman[323127]: 2026-01-22 15:00:42.179993429 +0000 UTC m=+0.165116558 container died 5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:00:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c239c4ec411576a7d9a6355754b3dc20dfbf370aa337331f56ba7be995859a4b-merged.mount: Deactivated successfully.
Jan 22 10:00:42 np0005592157 podman[323127]: 2026-01-22 15:00:42.241690122 +0000 UTC m=+0.226813251 container remove 5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 10:00:42 np0005592157 systemd[1]: libpod-conmon-5f226b89bcdc107e21d1a612796b2a352c1a9bc1b460cea5c60ba9a4c7827ae8.scope: Deactivated successfully.
Jan 22 10:00:42 np0005592157 podman[323167]: 2026-01-22 15:00:42.466534263 +0000 UTC m=+0.059067479 container create c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:00:42 np0005592157 systemd[1]: Started libpod-conmon-c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325.scope.
Jan 22 10:00:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:42 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:00:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:42 np0005592157 podman[323167]: 2026-01-22 15:00:42.435178059 +0000 UTC m=+0.027711345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:00:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7101713739af7449e0045e6adfe918799ea0854f5f03ebe297046e4db3d79043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7101713739af7449e0045e6adfe918799ea0854f5f03ebe297046e4db3d79043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7101713739af7449e0045e6adfe918799ea0854f5f03ebe297046e4db3d79043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7101713739af7449e0045e6adfe918799ea0854f5f03ebe297046e4db3d79043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:42 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7101713739af7449e0045e6adfe918799ea0854f5f03ebe297046e4db3d79043/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:42 np0005592157 podman[323167]: 2026-01-22 15:00:42.544658472 +0000 UTC m=+0.137191718 container init c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 10:00:42 np0005592157 podman[323167]: 2026-01-22 15:00:42.557401316 +0000 UTC m=+0.149934512 container start c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:00:42 np0005592157 podman[323167]: 2026-01-22 15:00:42.561115568 +0000 UTC m=+0.153648774 container attach c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:00:42 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 10:00:43 np0005592157 condescending_feynman[323183]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:00:43 np0005592157 condescending_feynman[323183]: --> relative data size: 1.0
Jan 22 10:00:43 np0005592157 condescending_feynman[323183]: --> All data devices are unavailable
Jan 22 10:00:43 np0005592157 systemd[1]: libpod-c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325.scope: Deactivated successfully.
Jan 22 10:00:43 np0005592157 podman[323167]: 2026-01-22 15:00:43.430247495 +0000 UTC m=+1.022780701 container died c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:00:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7101713739af7449e0045e6adfe918799ea0854f5f03ebe297046e4db3d79043-merged.mount: Deactivated successfully.
Jan 22 10:00:43 np0005592157 podman[323167]: 2026-01-22 15:00:43.498004668 +0000 UTC m=+1.090537884 container remove c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:00:43 np0005592157 systemd[1]: libpod-conmon-c22ab55d989f15d15d9f475a5294571deaeb60006cc4003fa0127b8a05e46325.scope: Deactivated successfully.
Jan 22 10:00:43 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:43.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:44 np0005592157 podman[323354]: 2026-01-22 15:00:44.290346378 +0000 UTC m=+0.069459806 container create 290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_carson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:00:44 np0005592157 systemd[1]: Started libpod-conmon-290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e.scope.
Jan 22 10:00:44 np0005592157 podman[323354]: 2026-01-22 15:00:44.261873665 +0000 UTC m=+0.040987133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:00:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:00:44 np0005592157 podman[323354]: 2026-01-22 15:00:44.387338453 +0000 UTC m=+0.166451931 container init 290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:00:44 np0005592157 podman[323354]: 2026-01-22 15:00:44.394595612 +0000 UTC m=+0.173709010 container start 290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:00:44 np0005592157 podman[323354]: 2026-01-22 15:00:44.398420316 +0000 UTC m=+0.177533744 container attach 290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:00:44 np0005592157 vigilant_carson[323370]: 167 167
Jan 22 10:00:44 np0005592157 systemd[1]: libpod-290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e.scope: Deactivated successfully.
Jan 22 10:00:44 np0005592157 podman[323354]: 2026-01-22 15:00:44.40423813 +0000 UTC m=+0.183351558 container died 290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 10:00:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1433ba29641589e5a7cd1d89b90f5b633dbb99efe02e63fb0782ed05f7e2b285-merged.mount: Deactivated successfully.
Jan 22 10:00:44 np0005592157 podman[323354]: 2026-01-22 15:00:44.455142047 +0000 UTC m=+0.234255445 container remove 290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:00:44 np0005592157 systemd[1]: libpod-conmon-290e0918cf92a1e9290cb8bb145c7ef609cb20fc29104ab6ee6052602b3d516e.scope: Deactivated successfully.
Jan 22 10:00:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:44 np0005592157 podman[323393]: 2026-01-22 15:00:44.679294241 +0000 UTC m=+0.053461131 container create 4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_villani, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:00:44 np0005592157 systemd[1]: Started libpod-conmon-4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa.scope.
Jan 22 10:00:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:00:44 np0005592157 podman[323393]: 2026-01-22 15:00:44.663548742 +0000 UTC m=+0.037715642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:00:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db768a995d18277f9098f6d254fef3f5617722b57ce8efa16bcfc946f6a55691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db768a995d18277f9098f6d254fef3f5617722b57ce8efa16bcfc946f6a55691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db768a995d18277f9098f6d254fef3f5617722b57ce8efa16bcfc946f6a55691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db768a995d18277f9098f6d254fef3f5617722b57ce8efa16bcfc946f6a55691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:44 np0005592157 podman[323393]: 2026-01-22 15:00:44.7740697 +0000 UTC m=+0.148236600 container init 4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:00:44 np0005592157 podman[323393]: 2026-01-22 15:00:44.788392244 +0000 UTC m=+0.162559184 container start 4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_villani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:00:44 np0005592157 podman[323393]: 2026-01-22 15:00:44.794168757 +0000 UTC m=+0.168335677 container attach 4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:00:44 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 54 slow ops, oldest one blocked for 5032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 10:00:45 np0005592157 eager_villani[323409]: {
Jan 22 10:00:45 np0005592157 eager_villani[323409]:    "0": [
Jan 22 10:00:45 np0005592157 eager_villani[323409]:        {
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "devices": [
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "/dev/loop3"
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            ],
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "lv_name": "ceph_lv0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "lv_size": "7511998464",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "name": "ceph_lv0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "tags": {
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.cluster_name": "ceph",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.crush_device_class": "",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.encrypted": "0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.osd_id": "0",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.type": "block",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:                "ceph.vdo": "0"
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            },
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "type": "block",
Jan 22 10:00:45 np0005592157 eager_villani[323409]:            "vg_name": "ceph_vg0"
Jan 22 10:00:45 np0005592157 eager_villani[323409]:        }
Jan 22 10:00:45 np0005592157 eager_villani[323409]:    ]
Jan 22 10:00:45 np0005592157 eager_villani[323409]: }
Jan 22 10:00:45 np0005592157 systemd[1]: libpod-4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa.scope: Deactivated successfully.
Jan 22 10:00:45 np0005592157 podman[323393]: 2026-01-22 15:00:45.585456302 +0000 UTC m=+0.959623232 container died 4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_villani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:00:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-db768a995d18277f9098f6d254fef3f5617722b57ce8efa16bcfc946f6a55691-merged.mount: Deactivated successfully.
Jan 22 10:00:45 np0005592157 podman[323393]: 2026-01-22 15:00:45.640292526 +0000 UTC m=+1.014459416 container remove 4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:00:45 np0005592157 systemd[1]: libpod-conmon-4928701c617031f0b8ccbea0383b315c67add4f5e0e7f47cb9c2665fce681faa.scope: Deactivated successfully.
Jan 22 10:00:45 np0005592157 podman[323418]: 2026-01-22 15:00:45.679373921 +0000 UTC m=+0.065871398 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 10:00:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:45.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:45 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:45 np0005592157 ceph-mon[74359]: Health check update: 54 slow ops, oldest one blocked for 5032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:46 np0005592157 podman[323587]: 2026-01-22 15:00:46.375919987 +0000 UTC m=+0.048127519 container create 8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rubin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:00:46 np0005592157 systemd[1]: Started libpod-conmon-8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61.scope.
Jan 22 10:00:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:00:46 np0005592157 podman[323587]: 2026-01-22 15:00:46.359459221 +0000 UTC m=+0.031666763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:00:46 np0005592157 podman[323587]: 2026-01-22 15:00:46.458355572 +0000 UTC m=+0.130563104 container init 8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 10:00:46 np0005592157 podman[323587]: 2026-01-22 15:00:46.466953734 +0000 UTC m=+0.139161266 container start 8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 10:00:46 np0005592157 podman[323587]: 2026-01-22 15:00:46.470680127 +0000 UTC m=+0.142887669 container attach 8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 10:00:46 np0005592157 heuristic_rubin[323603]: 167 167
Jan 22 10:00:46 np0005592157 podman[323587]: 2026-01-22 15:00:46.472993174 +0000 UTC m=+0.145200706 container died 8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:00:46 np0005592157 systemd[1]: libpod-8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61.scope: Deactivated successfully.
Jan 22 10:00:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b26a86c532e1770245dfdfb30aed12ac177834eb510f7392fedc4ea886a3569b-merged.mount: Deactivated successfully.
Jan 22 10:00:46 np0005592157 podman[323587]: 2026-01-22 15:00:46.507497315 +0000 UTC m=+0.179704837 container remove 8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rubin, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:00:46 np0005592157 systemd[1]: libpod-conmon-8ca0c99f0dd1160168865d2cc5cf41b07824bac2d49541d20df005d0c6f4ea61.scope: Deactivated successfully.
Jan 22 10:00:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:46.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:00:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:00:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:00:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:00:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:00:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:00:46 np0005592157 podman[323626]: 2026-01-22 15:00:46.72760685 +0000 UTC m=+0.053252736 container create dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curie, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:00:46 np0005592157 systemd[1]: Started libpod-conmon-dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868.scope.
Jan 22 10:00:46 np0005592157 podman[323626]: 2026-01-22 15:00:46.707653737 +0000 UTC m=+0.033299643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:00:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:00:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ae2cef2f41f001be8316637e1d4c1a710e3d88b007fb3f12a52ee2056879fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ae2cef2f41f001be8316637e1d4c1a710e3d88b007fb3f12a52ee2056879fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ae2cef2f41f001be8316637e1d4c1a710e3d88b007fb3f12a52ee2056879fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ae2cef2f41f001be8316637e1d4c1a710e3d88b007fb3f12a52ee2056879fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:00:46 np0005592157 podman[323626]: 2026-01-22 15:00:46.832324405 +0000 UTC m=+0.157970321 container init dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:00:46 np0005592157 podman[323626]: 2026-01-22 15:00:46.842970728 +0000 UTC m=+0.168616604 container start dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 22 10:00:46 np0005592157 podman[323626]: 2026-01-22 15:00:46.847190962 +0000 UTC m=+0.172836868 container attach dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:00:47 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:00:47
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:00:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:00:47.636 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:00:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:00:47.638 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:00:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:00:47.639 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:00:47 np0005592157 amazing_curie[323643]: {
Jan 22 10:00:47 np0005592157 amazing_curie[323643]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:00:47 np0005592157 amazing_curie[323643]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:00:47 np0005592157 amazing_curie[323643]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:00:47 np0005592157 amazing_curie[323643]:        "osd_id": 0,
Jan 22 10:00:47 np0005592157 amazing_curie[323643]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:00:47 np0005592157 amazing_curie[323643]:        "type": "bluestore"
Jan 22 10:00:47 np0005592157 amazing_curie[323643]:    }
Jan 22 10:00:47 np0005592157 amazing_curie[323643]: }
Jan 22 10:00:47 np0005592157 systemd[1]: libpod-dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868.scope: Deactivated successfully.
Jan 22 10:00:47 np0005592157 podman[323626]: 2026-01-22 15:00:47.80875822 +0000 UTC m=+1.134404146 container died dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:00:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-79ae2cef2f41f001be8316637e1d4c1a710e3d88b007fb3f12a52ee2056879fe-merged.mount: Deactivated successfully.
Jan 22 10:00:47 np0005592157 podman[323626]: 2026-01-22 15:00:47.888822537 +0000 UTC m=+1.214468453 container remove dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:00:47 np0005592157 systemd[1]: libpod-conmon-dbc0b3bdc43bba8d0b2542588aee24bc467d8a1244ea6f5fc0bb5c004ca5e868.scope: Deactivated successfully.
Jan 22 10:00:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:00:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:00:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fd7eef86-e5ad-449f-bc9a-32104e6079a0 does not exist
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 528255e3-bcfc-4b61-b17e-1e6f11809097 does not exist
Jan 22 10:00:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c3b110c3-a1d2-4774-afb6-c9e9a7697a02 does not exist
Jan 22 10:00:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:47.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:48 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:48 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 3.7 KiB/s rd, 255 B/s wr, 5 op/s
Jan 22 10:00:49 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:49.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 92 slow ops, oldest one blocked for 5038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:50 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:50 np0005592157 ceph-mon[74359]: Health check update: 92 slow ops, oldest one blocked for 5038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 10:00:51 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:51.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:52 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:52.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:53 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:54 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:54.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 92 slow ops, oldest one blocked for 5043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:55 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:55 np0005592157 ceph-mon[74359]: Health check update: 92 slow ops, oldest one blocked for 5043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:55 np0005592157 podman[323735]: 2026-01-22 15:00:55.390366264 +0000 UTC m=+0.117820789 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 22 10:00:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:56 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:57 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:00:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:57.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:00:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:58 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:00:59 np0005592157 ceph-mon[74359]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:00:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:59.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 92 slow ops, oldest one blocked for 5048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:00.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:00 np0005592157 ceph-mon[74359]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:00 np0005592157 ceph-mon[74359]: Health check update: 92 slow ops, oldest one blocked for 5048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:01 np0005592157 ceph-mon[74359]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:01.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:02 np0005592157 ceph-mon[74359]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:03 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:03.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:04 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008133476991438674 of space, bias 1.0, pg target 0.24075091894658476 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:01:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:01:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 88 slow ops, oldest one blocked for 5053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:05 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:05 np0005592157 ceph-mon[74359]: Health check update: 88 slow ops, oldest one blocked for 5053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:05.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:06 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:07 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:08.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:08.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:08 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:09 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:01:09 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:01:09 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:10.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 5057 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:10.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:10 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:10 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 5057 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:11 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:12.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:12 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:14.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:14 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:14.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 5063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:15 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:16 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 5063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:16 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592157 podman[323840]: 2026-01-22 15:01:16.355471207 +0000 UTC m=+0.083127403 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:01:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:01:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:01:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:01:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:01:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:01:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:01:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:17 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:18 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 22 10:01:19 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 5067 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:20 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:20 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 5067 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:20.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:01:21 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:22.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:22 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:22.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:01:23 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:24 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 5072 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:01:25 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:25 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 5072 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:26 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:26 np0005592157 podman[323915]: 2026-01-22 15:01:26.379236232 +0000 UTC m=+0.122045834 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 10:01:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 22 KiB/s wr, 7 op/s
Jan 22 10:01:27 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:28 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:28.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 22 KiB/s wr, 9 op/s
Jan 22 10:01:29 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 5077 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:30.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:30 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:30 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 5077 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 18 op/s
Jan 22 10:01:31 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:32.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:32.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:32 np0005592157 ceph-mon[74359]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 10:01:33 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:34.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:34 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 5082 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 10:01:36 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:36 np0005592157 ceph-mon[74359]: Health check update: 9 slow ops, oldest one blocked for 5082 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:36.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:37 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 10:01:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:38.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:38 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 597 B/s wr, 12 op/s
Jan 22 10:01:39 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:39 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 5088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:40.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 341 B/s wr, 10 op/s
Jan 22 10:01:41 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:41 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 5088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:42.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:42 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:42.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:43 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:44.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:44 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:44.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 5092 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:45 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:45 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 5092 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:46.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:46 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:01:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:01:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:01:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:01:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:01:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:01:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:47 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:47 np0005592157 podman[324001]: 2026-01-22 15:01:47.314329022 +0000 UTC m=+0.055494191 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 10:01:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:01:47
Jan 22 10:01:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:01:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:01:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 22 10:01:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:01:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:01:47.637 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:01:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:01:47.638 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:01:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:01:47.638 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:01:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:48.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:48 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:48.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 12ee3f77-3e19-48ce-9ea3-06113cb9e472 does not exist
Jan 22 10:01:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d0080562-118f-4c25-b64e-64115175cad1 does not exist
Jan 22 10:01:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1d6530fd-8938-40f3-87a0-52afa83e800a does not exist
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:01:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:01:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 5098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:50.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:01:50 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:01:50 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 5098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:50 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:01:50.436 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:01:50 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:01:50.439 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:01:50 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:01:50.440 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:01:50 np0005592157 podman[324295]: 2026-01-22 15:01:50.479161934 +0000 UTC m=+0.059534581 container create a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:01:50 np0005592157 systemd[1]: Started libpod-conmon-a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff.scope.
Jan 22 10:01:50 np0005592157 podman[324295]: 2026-01-22 15:01:50.451709916 +0000 UTC m=+0.032082683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:01:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:01:50 np0005592157 podman[324295]: 2026-01-22 15:01:50.581817729 +0000 UTC m=+0.162190396 container init a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 10:01:50 np0005592157 podman[324295]: 2026-01-22 15:01:50.588637607 +0000 UTC m=+0.169010294 container start a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:01:50 np0005592157 podman[324295]: 2026-01-22 15:01:50.592540183 +0000 UTC m=+0.172912830 container attach a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:01:50 np0005592157 clever_turing[324311]: 167 167
Jan 22 10:01:50 np0005592157 systemd[1]: libpod-a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff.scope: Deactivated successfully.
Jan 22 10:01:50 np0005592157 podman[324295]: 2026-01-22 15:01:50.596177263 +0000 UTC m=+0.176549920 container died a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:01:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8e0a77ec1818529ec44d69616ade7310e05947be2afeb75af4f2a0e3246ec56c-merged.mount: Deactivated successfully.
Jan 22 10:01:50 np0005592157 podman[324295]: 2026-01-22 15:01:50.639297158 +0000 UTC m=+0.219669815 container remove a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:01:50 np0005592157 systemd[1]: libpod-conmon-a4c8bbc10096858130b47312ac0525b3376c8a0684ee9056f1997c896f285fff.scope: Deactivated successfully.
Jan 22 10:01:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:50.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:50 np0005592157 podman[324335]: 2026-01-22 15:01:50.854397898 +0000 UTC m=+0.040649855 container create ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:01:50 np0005592157 systemd[1]: Started libpod-conmon-ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39.scope.
Jan 22 10:01:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:01:50 np0005592157 podman[324335]: 2026-01-22 15:01:50.837610354 +0000 UTC m=+0.023862331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:01:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fc0d9e7c97cf5481fbfd03d0cb49153adbdc91a55ab11426c533e966bc7321/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fc0d9e7c97cf5481fbfd03d0cb49153adbdc91a55ab11426c533e966bc7321/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fc0d9e7c97cf5481fbfd03d0cb49153adbdc91a55ab11426c533e966bc7321/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fc0d9e7c97cf5481fbfd03d0cb49153adbdc91a55ab11426c533e966bc7321/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fc0d9e7c97cf5481fbfd03d0cb49153adbdc91a55ab11426c533e966bc7321/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:50 np0005592157 podman[324335]: 2026-01-22 15:01:50.956272783 +0000 UTC m=+0.142524770 container init ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nash, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 22 10:01:50 np0005592157 podman[324335]: 2026-01-22 15:01:50.974490793 +0000 UTC m=+0.160742800 container start ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nash, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:01:50 np0005592157 podman[324335]: 2026-01-22 15:01:50.979161048 +0000 UTC m=+0.165413045 container attach ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nash, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:01:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 10:01:51 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:51 np0005592157 stupefied_nash[324351]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:01:51 np0005592157 stupefied_nash[324351]: --> relative data size: 1.0
Jan 22 10:01:51 np0005592157 stupefied_nash[324351]: --> All data devices are unavailable
Jan 22 10:01:51 np0005592157 systemd[1]: libpod-ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39.scope: Deactivated successfully.
Jan 22 10:01:51 np0005592157 podman[324335]: 2026-01-22 15:01:51.767648953 +0000 UTC m=+0.953900920 container died ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nash, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:01:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-95fc0d9e7c97cf5481fbfd03d0cb49153adbdc91a55ab11426c533e966bc7321-merged.mount: Deactivated successfully.
Jan 22 10:01:51 np0005592157 podman[324335]: 2026-01-22 15:01:51.829791567 +0000 UTC m=+1.016043524 container remove ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:01:51 np0005592157 systemd[1]: libpod-conmon-ea36837319e5ad5fd5adea81587614246a7ec878b07099dbe02588c8c6eb4d39.scope: Deactivated successfully.
Jan 22 10:01:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:52.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:52 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:52 np0005592157 podman[324522]: 2026-01-22 15:01:52.457840083 +0000 UTC m=+0.035594600 container create 660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:01:52 np0005592157 systemd[1]: Started libpod-conmon-660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be.scope.
Jan 22 10:01:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:01:52 np0005592157 podman[324522]: 2026-01-22 15:01:52.538031643 +0000 UTC m=+0.115786160 container init 660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:01:52 np0005592157 podman[324522]: 2026-01-22 15:01:52.44234223 +0000 UTC m=+0.020096767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:01:52 np0005592157 podman[324522]: 2026-01-22 15:01:52.544473902 +0000 UTC m=+0.122228429 container start 660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:01:52 np0005592157 exciting_nobel[324538]: 167 167
Jan 22 10:01:52 np0005592157 systemd[1]: libpod-660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be.scope: Deactivated successfully.
Jan 22 10:01:52 np0005592157 podman[324522]: 2026-01-22 15:01:52.548185423 +0000 UTC m=+0.125939950 container attach 660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:01:52 np0005592157 podman[324522]: 2026-01-22 15:01:52.548746047 +0000 UTC m=+0.126500574 container died 660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:01:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-417dedf03034762125524a2252ce11d7b141cbcf473ea6720c2f6cce4b6b1992-merged.mount: Deactivated successfully.
Jan 22 10:01:52 np0005592157 podman[324522]: 2026-01-22 15:01:52.582794298 +0000 UTC m=+0.160548815 container remove 660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:01:52 np0005592157 systemd[1]: libpod-conmon-660b0b2d8922f25f2f8a59c4d5145163f82f2c8b85c201476914a1dfe492f6be.scope: Deactivated successfully.
Jan 22 10:01:52 np0005592157 podman[324559]: 2026-01-22 15:01:52.729761166 +0000 UTC m=+0.039364693 container create b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 10:01:52 np0005592157 systemd[1]: Started libpod-conmon-b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4.scope.
Jan 22 10:01:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:52.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:52 np0005592157 podman[324559]: 2026-01-22 15:01:52.71169738 +0000 UTC m=+0.021300957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:01:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:01:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3503af5224583a922d32a538e8e5aeaa8d802ecf51a0ee6c6cebfa91759cd40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3503af5224583a922d32a538e8e5aeaa8d802ecf51a0ee6c6cebfa91759cd40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3503af5224583a922d32a538e8e5aeaa8d802ecf51a0ee6c6cebfa91759cd40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3503af5224583a922d32a538e8e5aeaa8d802ecf51a0ee6c6cebfa91759cd40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:52 np0005592157 podman[324559]: 2026-01-22 15:01:52.838693555 +0000 UTC m=+0.148297172 container init b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:01:52 np0005592157 podman[324559]: 2026-01-22 15:01:52.845701288 +0000 UTC m=+0.155304815 container start b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wing, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 10:01:52 np0005592157 podman[324559]: 2026-01-22 15:01:52.850420995 +0000 UTC m=+0.160024552 container attach b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:01:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 10:01:53 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:53 np0005592157 frosty_wing[324576]: {
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:    "0": [
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:        {
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "devices": [
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "/dev/loop3"
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            ],
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "lv_name": "ceph_lv0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "lv_size": "7511998464",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "name": "ceph_lv0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "tags": {
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.cluster_name": "ceph",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.crush_device_class": "",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.encrypted": "0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.osd_id": "0",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.type": "block",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:                "ceph.vdo": "0"
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            },
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "type": "block",
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:            "vg_name": "ceph_vg0"
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:        }
Jan 22 10:01:53 np0005592157 frosty_wing[324576]:    ]
Jan 22 10:01:53 np0005592157 frosty_wing[324576]: }
Jan 22 10:01:53 np0005592157 systemd[1]: libpod-b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4.scope: Deactivated successfully.
Jan 22 10:01:53 np0005592157 podman[324559]: 2026-01-22 15:01:53.642866859 +0000 UTC m=+0.952470396 container died b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:01:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a3503af5224583a922d32a538e8e5aeaa8d802ecf51a0ee6c6cebfa91759cd40-merged.mount: Deactivated successfully.
Jan 22 10:01:53 np0005592157 podman[324559]: 2026-01-22 15:01:53.708791476 +0000 UTC m=+1.018395003 container remove b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wing, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:01:53 np0005592157 systemd[1]: libpod-conmon-b9759c2470efea2ccae2ffdeea1acaf356765f608ce896005a633a26a81e5ce4.scope: Deactivated successfully.
Jan 22 10:01:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:54.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:54 np0005592157 podman[324741]: 2026-01-22 15:01:54.271707374 +0000 UTC m=+0.047305179 container create 2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:01:54 np0005592157 systemd[1]: Started libpod-conmon-2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a.scope.
Jan 22 10:01:54 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:01:54 np0005592157 podman[324741]: 2026-01-22 15:01:54.249045044 +0000 UTC m=+0.024642869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:01:54 np0005592157 podman[324741]: 2026-01-22 15:01:54.363948101 +0000 UTC m=+0.139545956 container init 2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:01:54 np0005592157 podman[324741]: 2026-01-22 15:01:54.374680926 +0000 UTC m=+0.150278741 container start 2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 10:01:54 np0005592157 podman[324741]: 2026-01-22 15:01:54.378732066 +0000 UTC m=+0.154329901 container attach 2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:01:54 np0005592157 focused_dhawan[324757]: 167 167
Jan 22 10:01:54 np0005592157 systemd[1]: libpod-2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a.scope: Deactivated successfully.
Jan 22 10:01:54 np0005592157 podman[324741]: 2026-01-22 15:01:54.380555101 +0000 UTC m=+0.156152906 container died 2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:01:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6da6feec92b2ec119618cc3e35f2c54d277a7f389d632f9754d1ed8432127ed7-merged.mount: Deactivated successfully.
Jan 22 10:01:54 np0005592157 podman[324741]: 2026-01-22 15:01:54.425541362 +0000 UTC m=+0.201139207 container remove 2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 10:01:54 np0005592157 systemd[1]: libpod-conmon-2843e7bbab97b366f3c5fafb3c3dd566da19b7f83be2b695e20c069ee66eb98a.scope: Deactivated successfully.
Jan 22 10:01:54 np0005592157 podman[324782]: 2026-01-22 15:01:54.609738449 +0000 UTC m=+0.051939503 container create cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:01:54 np0005592157 systemd[1]: Started libpod-conmon-cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb.scope.
Jan 22 10:01:54 np0005592157 podman[324782]: 2026-01-22 15:01:54.592202036 +0000 UTC m=+0.034403100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:01:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:01:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6765f9c107f2632021995a60ede2dc3a4f8fb1d3e630edc6a0dd7c453a110679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6765f9c107f2632021995a60ede2dc3a4f8fb1d3e630edc6a0dd7c453a110679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6765f9c107f2632021995a60ede2dc3a4f8fb1d3e630edc6a0dd7c453a110679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6765f9c107f2632021995a60ede2dc3a4f8fb1d3e630edc6a0dd7c453a110679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:01:54 np0005592157 podman[324782]: 2026-01-22 15:01:54.708424525 +0000 UTC m=+0.150625569 container init cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:01:54 np0005592157 podman[324782]: 2026-01-22 15:01:54.713776417 +0000 UTC m=+0.155977461 container start cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:01:54 np0005592157 podman[324782]: 2026-01-22 15:01:54.716776452 +0000 UTC m=+0.158977496 container attach cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hamilton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:01:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:54.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 5103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 426 B/s wr, 1 op/s
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 5103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]: {
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]:        "osd_id": 0,
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]:        "type": "bluestore"
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]:    }
Jan 22 10:01:55 np0005592157 sad_hamilton[324799]: }
Jan 22 10:01:55 np0005592157 systemd[1]: libpod-cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb.scope: Deactivated successfully.
Jan 22 10:01:55 np0005592157 podman[324782]: 2026-01-22 15:01:55.487369795 +0000 UTC m=+0.929570839 container died cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:01:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6765f9c107f2632021995a60ede2dc3a4f8fb1d3e630edc6a0dd7c453a110679-merged.mount: Deactivated successfully.
Jan 22 10:01:55 np0005592157 podman[324782]: 2026-01-22 15:01:55.544890785 +0000 UTC m=+0.987091849 container remove cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:01:55 np0005592157 systemd[1]: libpod-conmon-cece834dd187a473ba36fd86e66c542839fac40e97890f961c5df8ffc6bc26eb.scope: Deactivated successfully.
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:01:55 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c621bb7d-9c6a-44b2-9887-ddd7fed7cdcd does not exist
Jan 22 10:01:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8401dc0c-0536-426f-bc61-21d8867bcc10 does not exist
Jan 22 10:01:55 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4f254e29-e15a-4832-b173-dcd0c132daef does not exist
Jan 22 10:01:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:56.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:56 np0005592157 ceph-mon[74359]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:01:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 17K writes, 79K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1904 writes, 8758 keys, 1903 commit groups, 1.0 writes per commit group, ingest: 11.02 MB, 0.02 MB/s#012Interval WAL: 1904 writes, 1903 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     62.2      1.46              0.40        55    0.027       0      0       0.0       0.0#012  L6      1/0    8.76 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.6    108.9     93.8      5.43              1.95        54    0.100    507K    29K       0.0       0.0#012 Sum      1/0    8.76 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.6     85.9     87.1      6.88              2.35       109    0.063    507K    29K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.9     57.9     57.6      1.34              0.31        14    0.096     92K   3594       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    108.9     93.8      5.43              1.95        54    0.100    507K    29K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     62.4      1.45              0.40        54    0.027       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.089, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.59 GB write, 0.11 MB/s write, 0.58 GB read, 0.11 MB/s read, 6.9 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 63.04 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000281 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3362,59.90 MB,19.7027%) FilterBlock(110,1.36 MB,0.446555%) IndexBlock(110,1.79 MB,0.589205%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:01:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 10:01:57 np0005592157 podman[324886]: 2026-01-22 15:01:57.361804331 +0000 UTC m=+0.101056696 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 10:01:57 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:01:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:58.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:01:58 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:01:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:58.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 10:01:59 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 5108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:00.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:00 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:00 np0005592157 ceph-mon[74359]: Health check update: 20 slow ops, oldest one blocked for 5108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:00.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.906251) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121906310, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2210, "num_deletes": 251, "total_data_size": 3193522, "memory_usage": 3253024, "flush_reason": "Manual Compaction"}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121929283, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 3108278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77694, "largest_seqno": 79903, "table_properties": {"data_size": 3099212, "index_size": 5239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23322, "raw_average_key_size": 21, "raw_value_size": 3079129, "raw_average_value_size": 2822, "num_data_blocks": 225, "num_entries": 1091, "num_filter_entries": 1091, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093958, "oldest_key_time": 1769093958, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 23160 microseconds, and 6330 cpu microseconds.
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.929411) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 3108278 bytes OK
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.929453) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.931190) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.931204) EVENT_LOG_v1 {"time_micros": 1769094121931199, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.931219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 3184209, prev total WAL file size 3184209, number of live WAL files 2.
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.932290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(3035KB)], [179(8966KB)]
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121932324, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 12289650, "oldest_snapshot_seqno": -1}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 13399 keys, 10597107 bytes, temperature: kUnknown
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121989618, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 10597107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10525690, "index_size": 36808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33541, "raw_key_size": 368931, "raw_average_key_size": 27, "raw_value_size": 10299083, "raw_average_value_size": 768, "num_data_blocks": 1325, "num_entries": 13399, "num_filter_entries": 13399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989964) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10597107 bytes
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.991607) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.1 rd, 184.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 13916, records dropped: 517 output_compression: NoCompression
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.991632) EVENT_LOG_v1 {"time_micros": 1769094121991621, "job": 112, "event": "compaction_finished", "compaction_time_micros": 57391, "compaction_time_cpu_micros": 25329, "output_level": 6, "num_output_files": 1, "total_output_size": 10597107, "num_input_records": 13916, "num_output_records": 13399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121992338, "job": 112, "event": "table_file_deletion", "file_number": 181}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121994625, "job": 112, "event": "table_file_deletion", "file_number": 179}
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.932174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.994716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.994723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.994724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.994726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:02:01.994727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:02.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:02.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:02 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 426 B/s wr, 1 op/s
Jan 22 10:02:03 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:04.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0008140747138470129 of space, bias 1.0, pg target 0.2409661152987158 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:02:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:02:05 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 86 slow ops, oldest one blocked for 5113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 426 B/s wr, 1 op/s
Jan 22 10:02:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:06.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:06 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:06 np0005592157 ceph-mon[74359]: Health check update: 86 slow ops, oldest one blocked for 5113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:06.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 10:02:07 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:07 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:08 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:08.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:09 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 86 slow ops, oldest one blocked for 5118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:10.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:10.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:11 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:11 np0005592157 ceph-mon[74359]: Health check update: 86 slow ops, oldest one blocked for 5118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:12 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:12.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:12.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:13 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:13 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:14.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:14 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:14.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 86 slow ops, oldest one blocked for 5123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:15 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:15 np0005592157 ceph-mon[74359]: Health check update: 86 slow ops, oldest one blocked for 5123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:16.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:02:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:02:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:02:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:02:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:02:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:02:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:16.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:16 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:17 np0005592157 podman[324997]: 2026-01-22 15:02:17.792702946 +0000 UTC m=+0.073078836 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 10:02:18 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:18.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:02:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/68083093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:02:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:02:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/68083093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:02:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:18.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:19 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:19 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 86 slow ops, oldest one blocked for 5128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:20.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:20 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:20 np0005592157 ceph-mon[74359]: Health check update: 86 slow ops, oldest one blocked for 5128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:20.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:22.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:22 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:22.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:23 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:23 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:24.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:24 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:24.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 86 slow ops, oldest one blocked for 5133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:25 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:25 np0005592157 ceph-mon[74359]: Health check update: 86 slow ops, oldest one blocked for 5133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:26 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:26.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:28 np0005592157 podman[325046]: 2026-01-22 15:02:28.379912059 +0000 UTC m=+0.112724794 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:02:28 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:28.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:29 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:29 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 5138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:30.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:30 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:30 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 5138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:31 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:32.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:33 np0005592157 ceph-mon[74359]: 28 slow requests (by type [ 'delayed' : 28 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:02:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:02:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:34 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 5143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 10:02:35 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:35 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 5143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:36 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 22 10:02:37 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:38 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 10:02:39 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 5148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:40.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:40 np0005592157 ceph-mon[74359]: 87 slow requests (by type [ 'delayed' : 87 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:40 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 5148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 596 B/s wr, 22 op/s
Jan 22 10:02:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:42 np0005592157 ceph-mon[74359]: 34 slow requests (by type [ 'delayed' : 34 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:02:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 596 B/s wr, 22 op/s
Jan 22 10:02:43 np0005592157 ceph-mon[74359]: 89 slow requests (by type [ 'delayed' : 89 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:02:43 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:44 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 87 slow ops, oldest one blocked for 5153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 740 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 874 KiB/s wr, 26 op/s
Jan 22 10:02:45 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:45 np0005592157 ceph-mon[74359]: Health check update: 87 slow ops, oldest one blocked for 5153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:46.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:02:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:02:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 22 10:02:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:47 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 10:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:02:47
Jan 22 10:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', 'images', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Jan 22 10:02:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:02:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:02:47.638 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:02:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:02:47.639 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:02:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:02:47.639 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:02:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 22 10:02:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 22 10:02:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:48.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:48 np0005592157 podman[325132]: 2026-01-22 15:02:48.342486952 +0000 UTC m=+0.073106536 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 10:02:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 22 10:02:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:48.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 22 10:02:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 22 10:02:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 2.7 MiB/s wr, 28 op/s
Jan 22 10:02:49 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:49 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 5158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 22 10:02:50 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:50 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:50 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 5158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:50.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.7 MiB/s wr, 33 op/s
Jan 22 10:02:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 22 10:02:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 22 10:02:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:52.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:52 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:52 np0005592157 ceph-mon[74359]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:02:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 852 B/s wr, 7 op/s
Jan 22 10:02:53 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:54.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:54 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 5163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 782 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 678 KiB/s wr, 10 op/s
Jan 22 10:02:55 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:02:55.270 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:02:55 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:02:55.271 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:02:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:56.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:56 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:56 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 5163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 10:02:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:02:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:56.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:02:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:02:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 32c97a93-872e-474b-bcd3-2a733cfb64eb does not exist
Jan 22 10:02:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a978e5a7-ed7d-4797-bde0-7769cc096491 does not exist
Jan 22 10:02:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 98e72358-ebd1-4243-9dc5-63a5f629060f does not exist
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:02:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:02:58 np0005592157 podman[325467]: 2026-01-22 15:02:58.090157171 +0000 UTC m=+0.049813221 container create 655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 10:02:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:58.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:58 np0005592157 systemd[1]: Started libpod-conmon-655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42.scope.
Jan 22 10:02:58 np0005592157 podman[325467]: 2026-01-22 15:02:58.069699276 +0000 UTC m=+0.029355356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:02:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:02:58 np0005592157 podman[325467]: 2026-01-22 15:02:58.189638447 +0000 UTC m=+0.149294497 container init 655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 10:02:58 np0005592157 podman[325467]: 2026-01-22 15:02:58.20231053 +0000 UTC m=+0.161966600 container start 655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:02:58 np0005592157 intelligent_goldberg[325493]: 167 167
Jan 22 10:02:58 np0005592157 systemd[1]: libpod-655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42.scope: Deactivated successfully.
Jan 22 10:02:58 np0005592157 podman[325467]: 2026-01-22 15:02:58.207652692 +0000 UTC m=+0.167308772 container attach 655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:02:58 np0005592157 podman[325467]: 2026-01-22 15:02:58.20799856 +0000 UTC m=+0.167654600 container died 655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:02:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5825657aa90582deaad7292fc44c4aec0a23960dd922145bc43a76881379899a-merged.mount: Deactivated successfully.
Jan 22 10:02:58 np0005592157 podman[325467]: 2026-01-22 15:02:58.251910275 +0000 UTC m=+0.211566305 container remove 655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldberg, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:02:58 np0005592157 systemd[1]: libpod-conmon-655083928c7a45dca6971bf08b2d4c7e42da6890315e0314f823624e1982be42.scope: Deactivated successfully.
Jan 22 10:02:58 np0005592157 podman[325518]: 2026-01-22 15:02:58.450538898 +0000 UTC m=+0.049820631 container create 191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 10:02:58 np0005592157 systemd[1]: Started libpod-conmon-191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9.scope.
Jan 22 10:02:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:02:58 np0005592157 podman[325518]: 2026-01-22 15:02:58.421505492 +0000 UTC m=+0.020787245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:02:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db06ace5002f612a9210d049db43462d9f806d413dbbf2c1cf88dd09112fc25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:02:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db06ace5002f612a9210d049db43462d9f806d413dbbf2c1cf88dd09112fc25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:02:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db06ace5002f612a9210d049db43462d9f806d413dbbf2c1cf88dd09112fc25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:02:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db06ace5002f612a9210d049db43462d9f806d413dbbf2c1cf88dd09112fc25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:02:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db06ace5002f612a9210d049db43462d9f806d413dbbf2c1cf88dd09112fc25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:02:58 np0005592157 podman[325518]: 2026-01-22 15:02:58.539790582 +0000 UTC m=+0.139072365 container init 191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gauss, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:02:58 np0005592157 podman[325518]: 2026-01-22 15:02:58.553724336 +0000 UTC m=+0.153006069 container start 191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:02:58 np0005592157 podman[325518]: 2026-01-22 15:02:58.557484429 +0000 UTC m=+0.156766192 container attach 191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gauss, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:02:58 np0005592157 podman[325532]: 2026-01-22 15:02:58.595089137 +0000 UTC m=+0.111495344 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:02:58 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:02:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:02:58 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:02:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 26 op/s
Jan 22 10:02:59 np0005592157 nice_gauss[325540]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:02:59 np0005592157 nice_gauss[325540]: --> relative data size: 1.0
Jan 22 10:02:59 np0005592157 nice_gauss[325540]: --> All data devices are unavailable
Jan 22 10:02:59 np0005592157 systemd[1]: libpod-191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9.scope: Deactivated successfully.
Jan 22 10:02:59 np0005592157 podman[325518]: 2026-01-22 15:02:59.404225253 +0000 UTC m=+1.003506996 container died 191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gauss, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:02:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9db06ace5002f612a9210d049db43462d9f806d413dbbf2c1cf88dd09112fc25-merged.mount: Deactivated successfully.
Jan 22 10:02:59 np0005592157 podman[325518]: 2026-01-22 15:02:59.465034984 +0000 UTC m=+1.064316727 container remove 191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:02:59 np0005592157 systemd[1]: libpod-conmon-191ff69abfe2b100c9b0cfc53531d540f1daaefa3ab5c8008e108e86fae704a9.scope: Deactivated successfully.
Jan 22 10:02:59 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 82 slow ops, oldest one blocked for 5168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 22 10:03:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 22 10:03:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 22 10:03:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:03:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:00.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:03:00 np0005592157 podman[325726]: 2026-01-22 15:03:00.188674429 +0000 UTC m=+0.055219885 container create 53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:03:00 np0005592157 systemd[1]: Started libpod-conmon-53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6.scope.
Jan 22 10:03:00 np0005592157 podman[325726]: 2026-01-22 15:03:00.162313618 +0000 UTC m=+0.028859164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:03:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:03:00 np0005592157 podman[325726]: 2026-01-22 15:03:00.294683246 +0000 UTC m=+0.161228722 container init 53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williamson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:03:00 np0005592157 podman[325726]: 2026-01-22 15:03:00.303139154 +0000 UTC m=+0.169684600 container start 53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:03:00 np0005592157 podman[325726]: 2026-01-22 15:03:00.306712953 +0000 UTC m=+0.173258419 container attach 53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:03:00 np0005592157 silly_williamson[325742]: 167 167
Jan 22 10:03:00 np0005592157 systemd[1]: libpod-53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6.scope: Deactivated successfully.
Jan 22 10:03:00 np0005592157 podman[325726]: 2026-01-22 15:03:00.308200169 +0000 UTC m=+0.174745655 container died 53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williamson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:03:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-28cd1beb9f8790b729257cbb1dcb59c8988fb0cf7516810aff1b249b2534809b-merged.mount: Deactivated successfully.
Jan 22 10:03:00 np0005592157 podman[325726]: 2026-01-22 15:03:00.351239772 +0000 UTC m=+0.217785218 container remove 53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williamson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:03:00 np0005592157 systemd[1]: libpod-conmon-53d5988cdf834254da3ee9c306f5a0a374752ca9d5861f2760317d9bb03f0af6.scope: Deactivated successfully.
Jan 22 10:03:00 np0005592157 podman[325764]: 2026-01-22 15:03:00.604869274 +0000 UTC m=+0.085594135 container create 1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 10:03:00 np0005592157 systemd[1]: Started libpod-conmon-1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28.scope.
Jan 22 10:03:00 np0005592157 podman[325764]: 2026-01-22 15:03:00.569562402 +0000 UTC m=+0.050287313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:03:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:03:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2b22a9f13d461a27c7e3b5549b8df9ddc4f1f77fcded92e6946a679a8bc5c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2b22a9f13d461a27c7e3b5549b8df9ddc4f1f77fcded92e6946a679a8bc5c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2b22a9f13d461a27c7e3b5549b8df9ddc4f1f77fcded92e6946a679a8bc5c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2b22a9f13d461a27c7e3b5549b8df9ddc4f1f77fcded92e6946a679a8bc5c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:00 np0005592157 podman[325764]: 2026-01-22 15:03:00.731820638 +0000 UTC m=+0.212545529 container init 1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:03:00 np0005592157 podman[325764]: 2026-01-22 15:03:00.744831729 +0000 UTC m=+0.225556590 container start 1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:03:00 np0005592157 podman[325764]: 2026-01-22 15:03:00.749038593 +0000 UTC m=+0.229763424 container attach 1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bassi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:03:00 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:00 np0005592157 ceph-mon[74359]: Health check update: 82 slow ops, oldest one blocked for 5168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 22 op/s
Jan 22 10:03:01 np0005592157 confident_bassi[325782]: {
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:    "0": [
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:        {
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "devices": [
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "/dev/loop3"
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            ],
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "lv_name": "ceph_lv0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "lv_size": "7511998464",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "name": "ceph_lv0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "tags": {
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.cluster_name": "ceph",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.crush_device_class": "",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.encrypted": "0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.osd_id": "0",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.type": "block",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:                "ceph.vdo": "0"
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            },
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "type": "block",
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:            "vg_name": "ceph_vg0"
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:        }
Jan 22 10:03:01 np0005592157 confident_bassi[325782]:    ]
Jan 22 10:03:01 np0005592157 confident_bassi[325782]: }
Jan 22 10:03:01 np0005592157 systemd[1]: libpod-1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28.scope: Deactivated successfully.
Jan 22 10:03:01 np0005592157 podman[325764]: 2026-01-22 15:03:01.532586417 +0000 UTC m=+1.013311278 container died 1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:03:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ea2b22a9f13d461a27c7e3b5549b8df9ddc4f1f77fcded92e6946a679a8bc5c0-merged.mount: Deactivated successfully.
Jan 22 10:03:01 np0005592157 podman[325764]: 2026-01-22 15:03:01.604291667 +0000 UTC m=+1.085016528 container remove 1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bassi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:03:01 np0005592157 systemd[1]: libpod-conmon-1bbf2dca4fef6f5dbf16c206b014730fe0d1c2de1c212b1941bf8727d30dea28.scope: Deactivated successfully.
Jan 22 10:03:01 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:02 np0005592157 podman[325944]: 2026-01-22 15:03:02.313390634 +0000 UTC m=+0.054023945 container create fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pascal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 22 10:03:02 np0005592157 systemd[1]: Started libpod-conmon-fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c.scope.
Jan 22 10:03:02 np0005592157 podman[325944]: 2026-01-22 15:03:02.286463299 +0000 UTC m=+0.027096680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:03:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:03:02 np0005592157 podman[325944]: 2026-01-22 15:03:02.397050559 +0000 UTC m=+0.137683880 container init fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pascal, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 10:03:02 np0005592157 podman[325944]: 2026-01-22 15:03:02.404335159 +0000 UTC m=+0.144968460 container start fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pascal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:03:02 np0005592157 podman[325944]: 2026-01-22 15:03:02.407965799 +0000 UTC m=+0.148603910 container attach fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Jan 22 10:03:02 np0005592157 reverent_pascal[325960]: 167 167
Jan 22 10:03:02 np0005592157 systemd[1]: libpod-fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c.scope: Deactivated successfully.
Jan 22 10:03:02 np0005592157 podman[325944]: 2026-01-22 15:03:02.409032865 +0000 UTC m=+0.149666216 container died fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pascal, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:03:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-da4aa1c4823980dfe8dff97f5552173aad69ca53e266fe0d236e1f823df37900-merged.mount: Deactivated successfully.
Jan 22 10:03:02 np0005592157 podman[325944]: 2026-01-22 15:03:02.449107054 +0000 UTC m=+0.189740355 container remove fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pascal, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 10:03:02 np0005592157 systemd[1]: libpod-conmon-fb01a841d03e6bb660630f9feaa977daa2194e278ae1764c44c69b5ef046dc9c.scope: Deactivated successfully.
Jan 22 10:03:02 np0005592157 podman[325983]: 2026-01-22 15:03:02.631413645 +0000 UTC m=+0.045654858 container create 68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:03:02 np0005592157 systemd[1]: Started libpod-conmon-68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56.scope.
Jan 22 10:03:02 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:03:02 np0005592157 podman[325983]: 2026-01-22 15:03:02.608386627 +0000 UTC m=+0.022627870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:03:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f548ef42b5c4ea0bd70dcf40b56ebe34d501b9934d384c1af71fdc447bdf3a51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f548ef42b5c4ea0bd70dcf40b56ebe34d501b9934d384c1af71fdc447bdf3a51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f548ef42b5c4ea0bd70dcf40b56ebe34d501b9934d384c1af71fdc447bdf3a51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:02 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f548ef42b5c4ea0bd70dcf40b56ebe34d501b9934d384c1af71fdc447bdf3a51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:03:02 np0005592157 podman[325983]: 2026-01-22 15:03:02.71990933 +0000 UTC m=+0.134150533 container init 68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:03:02 np0005592157 podman[325983]: 2026-01-22 15:03:02.733081165 +0000 UTC m=+0.147322358 container start 68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:03:02 np0005592157 podman[325983]: 2026-01-22 15:03:02.73735213 +0000 UTC m=+0.151593353 container attach 68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:03:02 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:02.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 21 op/s
Jan 22 10:03:03 np0005592157 confident_panini[325999]: {
Jan 22 10:03:03 np0005592157 confident_panini[325999]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:03:03 np0005592157 confident_panini[325999]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:03:03 np0005592157 confident_panini[325999]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:03:03 np0005592157 confident_panini[325999]:        "osd_id": 0,
Jan 22 10:03:03 np0005592157 confident_panini[325999]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:03:03 np0005592157 confident_panini[325999]:        "type": "bluestore"
Jan 22 10:03:03 np0005592157 confident_panini[325999]:    }
Jan 22 10:03:03 np0005592157 confident_panini[325999]: }
Jan 22 10:03:03 np0005592157 systemd[1]: libpod-68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56.scope: Deactivated successfully.
Jan 22 10:03:03 np0005592157 podman[325983]: 2026-01-22 15:03:03.674826814 +0000 UTC m=+1.089068037 container died 68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:03:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f548ef42b5c4ea0bd70dcf40b56ebe34d501b9934d384c1af71fdc447bdf3a51-merged.mount: Deactivated successfully.
Jan 22 10:03:03 np0005592157 podman[325983]: 2026-01-22 15:03:03.737209014 +0000 UTC m=+1.151450217 container remove 68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 10:03:03 np0005592157 systemd[1]: libpod-conmon-68918b6dfc467e71f6148193940ab03e5e612efe88e1bc0c254d80b0460ecd56.scope: Deactivated successfully.
Jan 22 10:03:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:03:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:03:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 87a757dc-3f95-4d74-a3d1-05996d0c75ba does not exist
Jan 22 10:03:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 61eaf7bb-e7a0-44e5-85dc-9e6c1beaaa95 does not exist
Jan 22 10:03:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9491090f-3001-46de-9f89-756d321abae0 does not exist
Jan 22 10:03:03 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:03:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:03:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:03:04.275 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001802996463800484 of space, bias 1.0, pg target 0.5336869532849433 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:03:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:03:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:04.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:04 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 82 slow ops, oldest one blocked for 5173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 22 10:03:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:06 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:06 np0005592157 ceph-mon[74359]: Health check update: 82 slow ops, oldest one blocked for 5173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:06.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:07 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:07 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:08.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:08 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:08.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:09 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 82 slow ops, oldest one blocked for 5178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:10.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:10 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:10 np0005592157 ceph-mon[74359]: Health check update: 82 slow ops, oldest one blocked for 5178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:10.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 185 B/s rd, 92 B/s wr, 0 op/s
Jan 22 10:03:11 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:12.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:12 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:12.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 10:03:13 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:14.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:14 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 82 slow ops, oldest one blocked for 5183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 10:03:15 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:15 np0005592157 ceph-mon[74359]: Health check update: 82 slow ops, oldest one blocked for 5183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:16.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:03:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:03:16 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:16.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 10:03:17 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:18.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:18 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 10:03:19 np0005592157 podman[326139]: 2026-01-22 15:03:19.334692303 +0000 UTC m=+0.074029499 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:03:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 82 slow ops, oldest one blocked for 5188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:20 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:21 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:21 np0005592157 ceph-mon[74359]: Health check update: 82 slow ops, oldest one blocked for 5188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:21 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 10:03:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:22 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 10:03:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:24.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:24 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:24.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 82 slow ops, oldest one blocked for 5193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 10:03:25 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:25 np0005592157 ceph-mon[74359]: Health check update: 82 slow ops, oldest one blocked for 5193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:26.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:26 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:26.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:27 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:28.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:28 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 11 op/s
Jan 22 10:03:29 np0005592157 podman[326165]: 2026-01-22 15:03:29.402235868 +0000 UTC m=+0.137283010 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:03:29 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.086294) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210086345, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1263, "num_deletes": 257, "total_data_size": 1697898, "memory_usage": 1729184, "flush_reason": "Manual Compaction"}
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210096230, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 1671345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79904, "largest_seqno": 81166, "table_properties": {"data_size": 1665616, "index_size": 2868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14497, "raw_average_key_size": 20, "raw_value_size": 1653115, "raw_average_value_size": 2361, "num_data_blocks": 124, "num_entries": 700, "num_filter_entries": 700, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094122, "oldest_key_time": 1769094122, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 9991 microseconds, and 3985 cpu microseconds.
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.096287) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 1671345 bytes OK
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.096303) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.098170) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.098188) EVENT_LOG_v1 {"time_micros": 1769094210098183, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.098208) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1692064, prev total WAL file size 1692064, number of live WAL files 2.
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.098905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303232' seq:72057594037927935, type:22 .. '6C6F676D0034323735' seq:0, type:0; will stop at (end)
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1632KB)], [182(10MB)]
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210098982, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12268452, "oldest_snapshot_seqno": -1}
Jan 22 10:03:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:30.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 13568 keys, 12121884 bytes, temperature: kUnknown
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210188174, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 12121884, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12047841, "index_size": 39050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33925, "raw_key_size": 374107, "raw_average_key_size": 27, "raw_value_size": 11816637, "raw_average_value_size": 870, "num_data_blocks": 1415, "num_entries": 13568, "num_filter_entries": 13568, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.188653) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 12121884 bytes
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.190712) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.4 rd, 135.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(14.6) write-amplify(7.3) OK, records in: 14099, records dropped: 531 output_compression: NoCompression
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.190750) EVENT_LOG_v1 {"time_micros": 1769094210190733, "job": 114, "event": "compaction_finished", "compaction_time_micros": 89310, "compaction_time_cpu_micros": 39636, "output_level": 6, "num_output_files": 1, "total_output_size": 12121884, "num_input_records": 14099, "num_output_records": 13568, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210191541, "job": 114, "event": "table_file_deletion", "file_number": 184}
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210195912, "job": 114, "event": "table_file_deletion", "file_number": 182}
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.098823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.196026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.196035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.196039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.196043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:03:30.196047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:30 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:30.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:03:31 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:32.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:32 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:32.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:03:33 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:03:33.238 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:03:33 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:03:33.238 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:03:33 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:34 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:03:34.241 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:03:34 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:03:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:34.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:03:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:03:35 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:35 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:36.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:36 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:36.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:03:37 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:38.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:38 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:38.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:03:39 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:40.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:40 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:40 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:40.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 22 10:03:41 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:42.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:42 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:03:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:42.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:03:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:43 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:44.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:44 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:44.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:45 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:45 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:46.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:03:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:03:46 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:46.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:03:47
Jan 22 10:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images']
Jan 22 10:03:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:03:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:03:47.639 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:03:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:03:47.640 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:03:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:03:47.640 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:03:47 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:03:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:03:48 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:49.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:50 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:50.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:50 np0005592157 podman[326254]: 2026-01-22 15:03:50.349686827 +0000 UTC m=+0.069612170 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 10:03:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:51 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:51 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:52 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:03:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:52.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:03:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:53 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:54 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:54.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:55.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:55 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:55 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:56 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:57.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:57 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:03:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:58.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:03:58 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:03:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:03:59 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:00.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:00 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:00 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:00 np0005592157 podman[326329]: 2026-01-22 15:04:00.367114997 +0000 UTC m=+0.099480867 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 10:04:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:01 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:02 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:03 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:04.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:04 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001802996463800484 of space, bias 1.0, pg target 0.5336869532849433 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:04:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:04:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b75baf5f-5b2a-456f-9d82-6bac39107c89 does not exist
Jan 22 10:04:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0a5d2916-490f-43c0-b709-878de4ef2625 does not exist
Jan 22 10:04:05 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9a15a30e-c258-4b45-a167-72067fa5e875 does not exist
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:04:05 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:04:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:06 np0005592157 podman[326747]: 2026-01-22 15:04:06.25557692 +0000 UTC m=+0.059846008 container create 488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:04:06 np0005592157 systemd[1]: Started libpod-conmon-488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62.scope.
Jan 22 10:04:06 np0005592157 podman[326747]: 2026-01-22 15:04:06.222171796 +0000 UTC m=+0.026440944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:04:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:04:06 np0005592157 podman[326747]: 2026-01-22 15:04:06.371295537 +0000 UTC m=+0.175564595 container init 488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:04:06 np0005592157 podman[326747]: 2026-01-22 15:04:06.380368081 +0000 UTC m=+0.184637139 container start 488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rubin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:04:06 np0005592157 podman[326747]: 2026-01-22 15:04:06.384407501 +0000 UTC m=+0.188676559 container attach 488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rubin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:04:06 np0005592157 modest_rubin[326763]: 167 167
Jan 22 10:04:06 np0005592157 podman[326747]: 2026-01-22 15:04:06.389858966 +0000 UTC m=+0.194128024 container died 488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rubin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:04:06 np0005592157 systemd[1]: libpod-488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62.scope: Deactivated successfully.
Jan 22 10:04:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-745c7480bc87cd6f808782d19630ed28d61060ffdaaa7ef99023135449e159ff-merged.mount: Deactivated successfully.
Jan 22 10:04:06 np0005592157 podman[326747]: 2026-01-22 15:04:06.43702398 +0000 UTC m=+0.241293038 container remove 488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:04:06 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:06 np0005592157 systemd[1]: libpod-conmon-488b30518a49d643c402e48bd2397f39c8f57233595d6a9cbc4f636797b6cc62.scope: Deactivated successfully.
Jan 22 10:04:06 np0005592157 podman[326786]: 2026-01-22 15:04:06.6667073 +0000 UTC m=+0.064093063 container create 8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:04:06 np0005592157 systemd[1]: Started libpod-conmon-8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f.scope.
Jan 22 10:04:06 np0005592157 podman[326786]: 2026-01-22 15:04:06.638283469 +0000 UTC m=+0.035669322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:04:06 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:04:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a247b80d8d6a0a6dd0160a78a54b599c67b413ff62ac15243e8b3c4e23b6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a247b80d8d6a0a6dd0160a78a54b599c67b413ff62ac15243e8b3c4e23b6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a247b80d8d6a0a6dd0160a78a54b599c67b413ff62ac15243e8b3c4e23b6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a247b80d8d6a0a6dd0160a78a54b599c67b413ff62ac15243e8b3c4e23b6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:06 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a247b80d8d6a0a6dd0160a78a54b599c67b413ff62ac15243e8b3c4e23b6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:06 np0005592157 podman[326786]: 2026-01-22 15:04:06.765149551 +0000 UTC m=+0.162535344 container init 8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 10:04:06 np0005592157 podman[326786]: 2026-01-22 15:04:06.781751441 +0000 UTC m=+0.179137204 container start 8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:04:06 np0005592157 podman[326786]: 2026-01-22 15:04:06.813661348 +0000 UTC m=+0.211047141 container attach 8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:04:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:07 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:07 np0005592157 competent_poitras[326803]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:04:07 np0005592157 competent_poitras[326803]: --> relative data size: 1.0
Jan 22 10:04:07 np0005592157 competent_poitras[326803]: --> All data devices are unavailable
Jan 22 10:04:07 np0005592157 systemd[1]: libpod-8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f.scope: Deactivated successfully.
Jan 22 10:04:07 np0005592157 podman[326786]: 2026-01-22 15:04:07.757788506 +0000 UTC m=+1.155174289 container died 8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:04:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e35a247b80d8d6a0a6dd0160a78a54b599c67b413ff62ac15243e8b3c4e23b6b-merged.mount: Deactivated successfully.
Jan 22 10:04:07 np0005592157 podman[326786]: 2026-01-22 15:04:07.822184606 +0000 UTC m=+1.219570359 container remove 8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:04:07 np0005592157 systemd[1]: libpod-conmon-8e9543078129086b55d1c585ca82782ff16ec111fa13858faa207a96f03a3f6f.scope: Deactivated successfully.
Jan 22 10:04:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:08 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:08 np0005592157 podman[326969]: 2026-01-22 15:04:08.594562494 +0000 UTC m=+0.047147125 container create ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:04:08 np0005592157 systemd[1]: Started libpod-conmon-ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430.scope.
Jan 22 10:04:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:04:08 np0005592157 podman[326969]: 2026-01-22 15:04:08.576726584 +0000 UTC m=+0.029311205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:04:08 np0005592157 podman[326969]: 2026-01-22 15:04:08.673235637 +0000 UTC m=+0.125820298 container init ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:04:08 np0005592157 podman[326969]: 2026-01-22 15:04:08.680880325 +0000 UTC m=+0.133464966 container start ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_goldstine, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:04:08 np0005592157 podman[326969]: 2026-01-22 15:04:08.684389732 +0000 UTC m=+0.136974393 container attach ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_goldstine, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 10:04:08 np0005592157 peaceful_goldstine[326986]: 167 167
Jan 22 10:04:08 np0005592157 systemd[1]: libpod-ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430.scope: Deactivated successfully.
Jan 22 10:04:08 np0005592157 podman[326969]: 2026-01-22 15:04:08.6875289 +0000 UTC m=+0.140113521 container died ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:04:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-eea960f12019c8803cd1ce3fc49a1f7aed52d5f4711728f8e8e23d26e343cc8b-merged.mount: Deactivated successfully.
Jan 22 10:04:08 np0005592157 podman[326969]: 2026-01-22 15:04:08.718770201 +0000 UTC m=+0.171354822 container remove ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:04:08 np0005592157 systemd[1]: libpod-conmon-ae19adea9f0d1cd3f31182a0e5e6265c68cd719afaaf5dfbeb8933cdbfafa430.scope: Deactivated successfully.
Jan 22 10:04:08 np0005592157 podman[327011]: 2026-01-22 15:04:08.947509888 +0000 UTC m=+0.059090640 container create 19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:04:08 np0005592157 systemd[1]: Started libpod-conmon-19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5.scope.
Jan 22 10:04:09 np0005592157 podman[327011]: 2026-01-22 15:04:08.91678988 +0000 UTC m=+0.028370672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:04:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:09.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:04:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869bd20a643c23b34efe0676719154558c887ced7aea76d8d4f448385004e974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869bd20a643c23b34efe0676719154558c887ced7aea76d8d4f448385004e974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869bd20a643c23b34efe0676719154558c887ced7aea76d8d4f448385004e974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869bd20a643c23b34efe0676719154558c887ced7aea76d8d4f448385004e974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:09 np0005592157 podman[327011]: 2026-01-22 15:04:09.066603818 +0000 UTC m=+0.178184620 container init 19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:04:09 np0005592157 podman[327011]: 2026-01-22 15:04:09.079714742 +0000 UTC m=+0.191295494 container start 19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:04:09 np0005592157 podman[327011]: 2026-01-22 15:04:09.083854374 +0000 UTC m=+0.195435116 container attach 19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:04:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:09 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]: {
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:    "0": [
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:        {
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "devices": [
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "/dev/loop3"
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            ],
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "lv_name": "ceph_lv0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "lv_size": "7511998464",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "name": "ceph_lv0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "tags": {
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.cluster_name": "ceph",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.crush_device_class": "",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.encrypted": "0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.osd_id": "0",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.type": "block",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:                "ceph.vdo": "0"
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            },
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "type": "block",
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:            "vg_name": "ceph_vg0"
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:        }
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]:    ]
Jan 22 10:04:09 np0005592157 thirsty_nobel[327028]: }
Jan 22 10:04:09 np0005592157 systemd[1]: libpod-19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5.scope: Deactivated successfully.
Jan 22 10:04:09 np0005592157 podman[327038]: 2026-01-22 15:04:09.921289219 +0000 UTC m=+0.029304915 container died 19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:04:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-869bd20a643c23b34efe0676719154558c887ced7aea76d8d4f448385004e974-merged.mount: Deactivated successfully.
Jan 22 10:04:09 np0005592157 podman[327038]: 2026-01-22 15:04:09.97034511 +0000 UTC m=+0.078360796 container remove 19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:04:09 np0005592157 systemd[1]: libpod-conmon-19d279a80e7bc8c395b129840a722e8b4757edbfc085ef1588dae599b225a5c5.scope: Deactivated successfully.
Jan 22 10:04:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:10.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:10 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:10 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:10 np0005592157 podman[327193]: 2026-01-22 15:04:10.716733797 +0000 UTC m=+0.043421783 container create 03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:04:10 np0005592157 systemd[1]: Started libpod-conmon-03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336.scope.
Jan 22 10:04:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:04:10 np0005592157 podman[327193]: 2026-01-22 15:04:10.698909127 +0000 UTC m=+0.025597103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:04:10 np0005592157 podman[327193]: 2026-01-22 15:04:10.812132942 +0000 UTC m=+0.138820978 container init 03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:04:10 np0005592157 podman[327193]: 2026-01-22 15:04:10.818503059 +0000 UTC m=+0.145191005 container start 03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:04:10 np0005592157 podman[327193]: 2026-01-22 15:04:10.823124443 +0000 UTC m=+0.149812429 container attach 03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:04:10 np0005592157 focused_newton[327209]: 167 167
Jan 22 10:04:10 np0005592157 systemd[1]: libpod-03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336.scope: Deactivated successfully.
Jan 22 10:04:10 np0005592157 podman[327193]: 2026-01-22 15:04:10.824771144 +0000 UTC m=+0.151459100 container died 03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 10:04:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-75791551fb711c0c08eb8c3a03a67945b6815016dffb7453c7abe19a0f5992da-merged.mount: Deactivated successfully.
Jan 22 10:04:10 np0005592157 podman[327193]: 2026-01-22 15:04:10.86186007 +0000 UTC m=+0.188548026 container remove 03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:04:10 np0005592157 systemd[1]: libpod-conmon-03ed5648c95686a2f41ee8e24c1d2bad2246e98018c4fbffe3dc479bbe26e336.scope: Deactivated successfully.
Jan 22 10:04:11 np0005592157 podman[327233]: 2026-01-22 15:04:11.023952231 +0000 UTC m=+0.042065619 container create 52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:04:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:11 np0005592157 systemd[1]: Started libpod-conmon-52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1.scope.
Jan 22 10:04:11 np0005592157 podman[327233]: 2026-01-22 15:04:11.004553613 +0000 UTC m=+0.022667021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:04:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:04:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4920977efad72a6524b4420c2b8357bc3c20aab656630f71eb785072a94d73a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4920977efad72a6524b4420c2b8357bc3c20aab656630f71eb785072a94d73a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4920977efad72a6524b4420c2b8357bc3c20aab656630f71eb785072a94d73a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4920977efad72a6524b4420c2b8357bc3c20aab656630f71eb785072a94d73a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:04:11 np0005592157 podman[327233]: 2026-01-22 15:04:11.138638723 +0000 UTC m=+0.156752201 container init 52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:04:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:11 np0005592157 podman[327233]: 2026-01-22 15:04:11.148109327 +0000 UTC m=+0.166222735 container start 52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_swartz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:04:11 np0005592157 podman[327233]: 2026-01-22 15:04:11.151711736 +0000 UTC m=+0.169825194 container attach 52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:04:11 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]: {
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]:        "osd_id": 0,
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]:        "type": "bluestore"
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]:    }
Jan 22 10:04:11 np0005592157 nervous_swartz[327250]: }
Jan 22 10:04:12 np0005592157 systemd[1]: libpod-52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1.scope: Deactivated successfully.
Jan 22 10:04:12 np0005592157 podman[327272]: 2026-01-22 15:04:12.077163562 +0000 UTC m=+0.038251525 container died 52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_swartz, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:04:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e4920977efad72a6524b4420c2b8357bc3c20aab656630f71eb785072a94d73a-merged.mount: Deactivated successfully.
Jan 22 10:04:12 np0005592157 podman[327272]: 2026-01-22 15:04:12.128083519 +0000 UTC m=+0.089171412 container remove 52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_swartz, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 10:04:12 np0005592157 systemd[1]: libpod-conmon-52647c5bc9a442ee9a2ed8835e03d74d121a5ed5da16e83670a9b8629f225ef1.scope: Deactivated successfully.
Jan 22 10:04:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:04:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:04:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1aee39df-6db4-4343-b784-d87a2f0e0e6e does not exist
Jan 22 10:04:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 024cea40-3409-485f-bebc-816fbeb47ed8 does not exist
Jan 22 10:04:12 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9c2bbb27-b18d-47dd-9e77-abde616e1193 does not exist
Jan 22 10:04:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:12.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:12 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:13.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:13 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:14.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:14 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:15 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:15 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:04:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:04:16 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:17.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:17 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:18.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:04:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3231129722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:04:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:04:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3231129722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:04:18 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 170 B/s wr, 1 op/s
Jan 22 10:04:19 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:20 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:20 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 22 10:04:21 np0005592157 podman[327391]: 2026-01-22 15:04:21.350627204 +0000 UTC m=+0.077878864 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 10:04:21 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:22 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 22 10:04:23 np0005592157 ceph-mon[74359]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:24 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:25.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 99 slow ops, oldest one blocked for 5253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 814 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 528 KiB/s wr, 33 op/s
Jan 22 10:04:25 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:25 np0005592157 ceph-mon[74359]: Health check update: 99 slow ops, oldest one blocked for 5253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:26 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:27.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 22 10:04:27 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:28.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:28 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:29.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 22 10:04:29 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 5258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:31.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 10:04:31 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:31 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 5258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:31 np0005592157 podman[327415]: 2026-01-22 15:04:31.384920708 +0000 UTC m=+0.108209913 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:04:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:04:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 15K writes, 51K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 15K writes, 5037 syncs, 3.14 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1091 writes, 2353 keys, 1091 commit groups, 1.0 writes per commit group, ingest: 0.98 MB, 0.00 MB/s#012Interval WAL: 1091 writes, 503 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:04:32 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:32 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:33.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 10:04:33 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:34 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:35.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 5263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 10:04:35 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:35 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 5263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:36 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:37.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 1.3 MiB/s wr, 8 op/s
Jan 22 10:04:37 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:38.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:38 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:39.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:39 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 5268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:40 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:40 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 5268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:41.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:41 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:42.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:42 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:43.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:43 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:44.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:44 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 10:04:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:45.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 5273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:45 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:45 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 5273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:46.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:46 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:04:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:04:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:47 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:04:47
Jan 22 10:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', '.rgw.root']
Jan 22 10:04:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:04:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:04:47.641 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:04:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:04:47.641 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:04:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:04:47.642 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:04:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:48 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:49.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.511782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289512015, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1223, "num_deletes": 251, "total_data_size": 1575600, "memory_usage": 1603152, "flush_reason": "Manual Compaction"}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289533166, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1539104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81167, "largest_seqno": 82389, "table_properties": {"data_size": 1533722, "index_size": 2585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13910, "raw_average_key_size": 20, "raw_value_size": 1522007, "raw_average_value_size": 2275, "num_data_blocks": 111, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094210, "oldest_key_time": 1769094210, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 21415 microseconds, and 12480 cpu microseconds.
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.533286) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1539104 bytes OK
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.533333) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.537482) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.537535) EVENT_LOG_v1 {"time_micros": 1769094289537524, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.537566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 1570004, prev total WAL file size 1570004, number of live WAL files 2.
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.538902) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1503KB)], [185(11MB)]
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289539070, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 13660988, "oldest_snapshot_seqno": -1}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 13720 keys, 11987375 bytes, temperature: kUnknown
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289643207, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 11987375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11912631, "index_size": 39367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34309, "raw_key_size": 378447, "raw_average_key_size": 27, "raw_value_size": 11679132, "raw_average_value_size": 851, "num_data_blocks": 1424, "num_entries": 13720, "num_filter_entries": 13720, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.643873) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11987375 bytes
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.645590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.9 rd, 114.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(16.7) write-amplify(7.8) OK, records in: 14237, records dropped: 517 output_compression: NoCompression
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.645625) EVENT_LOG_v1 {"time_micros": 1769094289645610, "job": 116, "event": "compaction_finished", "compaction_time_micros": 104339, "compaction_time_cpu_micros": 65702, "output_level": 6, "num_output_files": 1, "total_output_size": 11987375, "num_input_records": 14237, "num_output_records": 13720, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289646598, "job": 116, "event": "table_file_deletion", "file_number": 187}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289650654, "job": 116, "event": "table_file_deletion", "file_number": 185}
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.538705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.650724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.650731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.650734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.650738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:04:49.650740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 5278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:50 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:50 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 5278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:51.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:51 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:52.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:52 np0005592157 podman[327503]: 2026-01-22 15:04:52.37154279 +0000 UTC m=+0.092175946 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 10:04:52 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:53.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:53 np0005592157 ceph-mon[74359]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:54.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:54 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 17 slow ops, oldest one blocked for 5283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:55 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:55 np0005592157 ceph-mon[74359]: Health check update: 17 slow ops, oldest one blocked for 5283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:56.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:56 np0005592157 ceph-mon[74359]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:57.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:57 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:58.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:58 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:04:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:59.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:04:59 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 82 slow ops, oldest one blocked for 5288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:00.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:00 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:00 np0005592157 ceph-mon[74359]: Health check update: 82 slow ops, oldest one blocked for 5288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:01.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:01 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:02.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:02 np0005592157 podman[327577]: 2026-01-22 15:05:02.370299448 +0000 UTC m=+0.093397517 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:05:02 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:03.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:03 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:04.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:04 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002791918213753955 of space, bias 1.0, pg target 0.8264077912711707 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:05:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:05:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:05:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 5293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:05 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:05 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 5293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:06.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:06 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:07 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:08 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:09.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:09 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 5298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:10.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:10 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:10 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 5298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:11.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:12 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:12.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:13 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:13.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:13 np0005592157 podman[327780]: 2026-01-22 15:05:13.350151265 +0000 UTC m=+0.058257789 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:05:13 np0005592157 podman[327780]: 2026-01-22 15:05:13.454313096 +0000 UTC m=+0.162419580 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:05:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:05:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592157 podman[327935]: 2026-01-22 15:05:14.05232514 +0000 UTC m=+0.058279290 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:05:14 np0005592157 podman[327935]: 2026-01-22 15:05:14.059284952 +0000 UTC m=+0.065239102 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592157 podman[328002]: 2026-01-22 15:05:14.238163188 +0000 UTC m=+0.050130758 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, release=1793, architecture=x86_64, description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 10:05:14 np0005592157 podman[328002]: 2026-01-22 15:05:14.247253783 +0000 UTC m=+0.059221363 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, build-date=2023-02-22T09:23:20, release=1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:05:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:14.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:05:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:15.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 5303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:15 np0005592157 podman[328304]: 2026-01-22 15:05:15.568357328 +0000 UTC m=+0.037346403 container create 7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jones, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 10:05:15 np0005592157 systemd[1]: Started libpod-conmon-7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0.scope.
Jan 22 10:05:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:15 np0005592157 podman[328304]: 2026-01-22 15:05:15.552096387 +0000 UTC m=+0.021085492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:15 np0005592157 podman[328304]: 2026-01-22 15:05:15.649551703 +0000 UTC m=+0.118540798 container init 7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jones, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 10:05:15 np0005592157 podman[328304]: 2026-01-22 15:05:15.656246958 +0000 UTC m=+0.125236033 container start 7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:05:15 np0005592157 podman[328304]: 2026-01-22 15:05:15.659767834 +0000 UTC m=+0.128756909 container attach 7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:05:15 np0005592157 zealous_jones[328321]: 167 167
Jan 22 10:05:15 np0005592157 systemd[1]: libpod-7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0.scope: Deactivated successfully.
Jan 22 10:05:15 np0005592157 podman[328304]: 2026-01-22 15:05:15.661721862 +0000 UTC m=+0.130710947 container died 7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 10:05:15 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:15 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dbbd9243986afc3cabac873e6b6a6281b36ad65248f388a2cb76232e2e06c672-merged.mount: Deactivated successfully.
Jan 22 10:05:15 np0005592157 podman[328304]: 2026-01-22 15:05:15.706458727 +0000 UTC m=+0.175447802 container remove 7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:05:15 np0005592157 systemd[1]: libpod-conmon-7a0dc16a2e53f69dee5f9dca0ca05cd982c87fd9a3d7bf346d163506cba8b2f0.scope: Deactivated successfully.
Jan 22 10:05:15 np0005592157 podman[328347]: 2026-01-22 15:05:15.858313356 +0000 UTC m=+0.044375907 container create ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:15 np0005592157 systemd[1]: Started libpod-conmon-ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44.scope.
Jan 22 10:05:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:15 np0005592157 podman[328347]: 2026-01-22 15:05:15.840510796 +0000 UTC m=+0.026573437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7387b25d2e56ae07bfc8cdb47ff3bd10bd4838b1edd3b440c8f7a65760549d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7387b25d2e56ae07bfc8cdb47ff3bd10bd4838b1edd3b440c8f7a65760549d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7387b25d2e56ae07bfc8cdb47ff3bd10bd4838b1edd3b440c8f7a65760549d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7387b25d2e56ae07bfc8cdb47ff3bd10bd4838b1edd3b440c8f7a65760549d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:15 np0005592157 podman[328347]: 2026-01-22 15:05:15.950704777 +0000 UTC m=+0.136767368 container init ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:05:15 np0005592157 podman[328347]: 2026-01-22 15:05:15.959675198 +0000 UTC m=+0.145737759 container start ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:15 np0005592157 podman[328347]: 2026-01-22 15:05:15.963537533 +0000 UTC m=+0.149600094 container attach ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:16.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:05:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 5303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]: [
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:    {
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "available": false,
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "ceph_device": false,
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "lsm_data": {},
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "lvs": [],
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "path": "/dev/sr0",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "rejected_reasons": [
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "Has a FileSystem",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "Insufficient space (<5GB)"
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        ],
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        "sys_api": {
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "actuators": null,
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "device_nodes": "sr0",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "devname": "sr0",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "human_readable_size": "482.00 KB",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "id_bus": "ata",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "model": "QEMU DVD-ROM",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "nr_requests": "2",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "parent": "/dev/sr0",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "partitions": {},
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "path": "/dev/sr0",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "removable": "1",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "rev": "2.5+",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "ro": "0",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "rotational": "1",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "sas_address": "",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "sas_device_handle": "",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "scheduler_mode": "mq-deadline",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "sectors": 0,
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "sectorsize": "2048",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "size": 493568.0,
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "support_discard": "2048",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "type": "disk",
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:            "vendor": "QEMU"
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:        }
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]:    }
Jan 22 10:05:17 np0005592157 pensive_chebyshev[328364]: ]
Jan 22 10:05:17 np0005592157 systemd[1]: libpod-ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44.scope: Deactivated successfully.
Jan 22 10:05:17 np0005592157 podman[328347]: 2026-01-22 15:05:17.107895775 +0000 UTC m=+1.293958336 container died ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:17 np0005592157 systemd[1]: libpod-ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44.scope: Consumed 1.166s CPU time.
Jan 22 10:05:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:17.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7387b25d2e56ae07bfc8cdb47ff3bd10bd4838b1edd3b440c8f7a65760549d73-merged.mount: Deactivated successfully.
Jan 22 10:05:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:17 np0005592157 podman[328347]: 2026-01-22 15:05:17.181019371 +0000 UTC m=+1.367081922 container remove ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 10:05:17 np0005592157 systemd[1]: libpod-conmon-ebaaa910aab9bb1d1f3d890792d600acca4edef0e8a0c311ea8fd90047c80e44.scope: Deactivated successfully.
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3ac6c523-99b0-4cf9-87bd-3b75fc178eb5 does not exist
Jan 22 10:05:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9f496ab3-c4a6-4810-bf27-6bc245f33e7f does not exist
Jan 22 10:05:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7ba7a822-9fb4-40f9-be83-d227db84fa0a does not exist
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:05:17 np0005592157 podman[329593]: 2026-01-22 15:05:17.972714036 +0000 UTC m=+0.054499466 container create cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:05:18 np0005592157 systemd[1]: Started libpod-conmon-cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0.scope.
Jan 22 10:05:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:18 np0005592157 podman[329593]: 2026-01-22 15:05:17.957593483 +0000 UTC m=+0.039378933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:18 np0005592157 podman[329593]: 2026-01-22 15:05:18.069814823 +0000 UTC m=+0.151600423 container init cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:05:18 np0005592157 podman[329593]: 2026-01-22 15:05:18.077717839 +0000 UTC m=+0.159503269 container start cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:05:18 np0005592157 podman[329593]: 2026-01-22 15:05:18.081059631 +0000 UTC m=+0.162845061 container attach cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:18 np0005592157 charming_shockley[329609]: 167 167
Jan 22 10:05:18 np0005592157 systemd[1]: libpod-cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0.scope: Deactivated successfully.
Jan 22 10:05:18 np0005592157 podman[329593]: 2026-01-22 15:05:18.08343893 +0000 UTC m=+0.165224360 container died cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 10:05:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d89b3c4ff253f20e609f29523c29d8c6cca39e1e48d29353200f399c652a3ae9-merged.mount: Deactivated successfully.
Jan 22 10:05:18 np0005592157 podman[329593]: 2026-01-22 15:05:18.121546621 +0000 UTC m=+0.203332051 container remove cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 10:05:18 np0005592157 systemd[1]: libpod-conmon-cd094f89ffb6368befef941ba5738a8654a06ba300b84f03e0da4f2d6e0c5cf0.scope: Deactivated successfully.
Jan 22 10:05:18 np0005592157 podman[329633]: 2026-01-22 15:05:18.322327507 +0000 UTC m=+0.044004327 container create 24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:05:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:18.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:18 np0005592157 systemd[1]: Started libpod-conmon-24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed.scope.
Jan 22 10:05:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a77f4690cd967aedf4d644d2ef80409fdd966a7d6ae047504d73eb2ec23a2ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a77f4690cd967aedf4d644d2ef80409fdd966a7d6ae047504d73eb2ec23a2ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a77f4690cd967aedf4d644d2ef80409fdd966a7d6ae047504d73eb2ec23a2ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a77f4690cd967aedf4d644d2ef80409fdd966a7d6ae047504d73eb2ec23a2ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a77f4690cd967aedf4d644d2ef80409fdd966a7d6ae047504d73eb2ec23a2ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:18 np0005592157 podman[329633]: 2026-01-22 15:05:18.301402711 +0000 UTC m=+0.023079581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:18 np0005592157 podman[329633]: 2026-01-22 15:05:18.401478541 +0000 UTC m=+0.123155401 container init 24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 10:05:18 np0005592157 podman[329633]: 2026-01-22 15:05:18.418069071 +0000 UTC m=+0.139745931 container start 24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:05:18 np0005592157 podman[329633]: 2026-01-22 15:05:18.422422179 +0000 UTC m=+0.144099039 container attach 24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 22 10:05:18 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:19.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:19 np0005592157 kind_perlman[329650]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:05:19 np0005592157 kind_perlman[329650]: --> relative data size: 1.0
Jan 22 10:05:19 np0005592157 kind_perlman[329650]: --> All data devices are unavailable
Jan 22 10:05:19 np0005592157 systemd[1]: libpod-24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed.scope: Deactivated successfully.
Jan 22 10:05:19 np0005592157 podman[329669]: 2026-01-22 15:05:19.256861168 +0000 UTC m=+0.028615976 container died 24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:05:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2a77f4690cd967aedf4d644d2ef80409fdd966a7d6ae047504d73eb2ec23a2ad-merged.mount: Deactivated successfully.
Jan 22 10:05:19 np0005592157 podman[329669]: 2026-01-22 15:05:19.309413876 +0000 UTC m=+0.081168674 container remove 24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:05:19 np0005592157 systemd[1]: libpod-conmon-24fef6bffdf1aa910b841968280a3b1bf644d09beaf82c1c585467c73e2a0fed.scope: Deactivated successfully.
Jan 22 10:05:19 np0005592157 podman[329872]: 2026-01-22 15:05:19.96011152 +0000 UTC m=+0.063108989 container create 3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:05:19 np0005592157 systemd[1]: Started libpod-conmon-3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9.scope.
Jan 22 10:05:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:20 np0005592157 podman[329872]: 2026-01-22 15:05:19.936846906 +0000 UTC m=+0.039844485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:20 np0005592157 podman[329872]: 2026-01-22 15:05:20.044236157 +0000 UTC m=+0.147233676 container init 3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:05:20 np0005592157 podman[329872]: 2026-01-22 15:05:20.057870164 +0000 UTC m=+0.160867683 container start 3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:05:20 np0005592157 podman[329872]: 2026-01-22 15:05:20.06175314 +0000 UTC m=+0.164750629 container attach 3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:05:20 np0005592157 priceless_blackburn[329888]: 167 167
Jan 22 10:05:20 np0005592157 systemd[1]: libpod-3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9.scope: Deactivated successfully.
Jan 22 10:05:20 np0005592157 podman[329872]: 2026-01-22 15:05:20.066826485 +0000 UTC m=+0.169823994 container died 3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:05:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8a107d17d590166989af247b8a807e0d9071caf3b51a4f477dfdd3836ac53625-merged.mount: Deactivated successfully.
Jan 22 10:05:20 np0005592157 podman[329872]: 2026-01-22 15:05:20.115353683 +0000 UTC m=+0.218351192 container remove 3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:05:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 5308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:20 np0005592157 systemd[1]: libpod-conmon-3085be72a0e19616df961f25cab3b9201ccbad68584ed0ce6b0f5599831c56f9.scope: Deactivated successfully.
Jan 22 10:05:20 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:20 np0005592157 podman[329912]: 2026-01-22 15:05:20.328734931 +0000 UTC m=+0.054976138 container create cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 10:05:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:05:20 np0005592157 systemd[1]: Started libpod-conmon-cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0.scope.
Jan 22 10:05:20 np0005592157 podman[329912]: 2026-01-22 15:05:20.304085292 +0000 UTC m=+0.030326549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f06a800a1487116da7fb22d530b59dd9e0fe348db1a404fe7468fd47e704a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f06a800a1487116da7fb22d530b59dd9e0fe348db1a404fe7468fd47e704a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f06a800a1487116da7fb22d530b59dd9e0fe348db1a404fe7468fd47e704a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f06a800a1487116da7fb22d530b59dd9e0fe348db1a404fe7468fd47e704a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:20 np0005592157 podman[329912]: 2026-01-22 15:05:20.427820767 +0000 UTC m=+0.154061984 container init cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:05:20 np0005592157 podman[329912]: 2026-01-22 15:05:20.43402311 +0000 UTC m=+0.160264287 container start cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:05:20 np0005592157 podman[329912]: 2026-01-22 15:05:20.442425428 +0000 UTC m=+0.168666615 container attach cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:05:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:21.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:05:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:21 np0005592157 elated_sammet[329928]: {
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:    "0": [
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:        {
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "devices": [
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "/dev/loop3"
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            ],
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "lv_name": "ceph_lv0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "lv_size": "7511998464",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "name": "ceph_lv0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "tags": {
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.cluster_name": "ceph",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.crush_device_class": "",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.encrypted": "0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.osd_id": "0",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.type": "block",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:                "ceph.vdo": "0"
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            },
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "type": "block",
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:            "vg_name": "ceph_vg0"
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:        }
Jan 22 10:05:21 np0005592157 elated_sammet[329928]:    ]
Jan 22 10:05:21 np0005592157 elated_sammet[329928]: }
Jan 22 10:05:21 np0005592157 systemd[1]: libpod-cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0.scope: Deactivated successfully.
Jan 22 10:05:21 np0005592157 podman[329912]: 2026-01-22 15:05:21.262281218 +0000 UTC m=+0.988522385 container died cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:05:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-52f06a800a1487116da7fb22d530b59dd9e0fe348db1a404fe7468fd47e704a0-merged.mount: Deactivated successfully.
Jan 22 10:05:21 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 5308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:21 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592157 podman[329912]: 2026-01-22 15:05:21.323767826 +0000 UTC m=+1.050008993 container remove cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:05:21 np0005592157 systemd[1]: libpod-conmon-cca1a843be102a9fbfda98a7fdfc48c181763a88444370570478971fcd5c46c0.scope: Deactivated successfully.
Jan 22 10:05:22 np0005592157 podman[330088]: 2026-01-22 15:05:22.122920136 +0000 UTC m=+0.053142423 container create 002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:05:22 np0005592157 systemd[1]: Started libpod-conmon-002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64.scope.
Jan 22 10:05:22 np0005592157 podman[330088]: 2026-01-22 15:05:22.096501744 +0000 UTC m=+0.026724021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:22 np0005592157 podman[330088]: 2026-01-22 15:05:22.223255303 +0000 UTC m=+0.153477580 container init 002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:05:22 np0005592157 podman[330088]: 2026-01-22 15:05:22.235953216 +0000 UTC m=+0.166175473 container start 002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:05:22 np0005592157 podman[330088]: 2026-01-22 15:05:22.239788681 +0000 UTC m=+0.170010948 container attach 002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:05:22 np0005592157 cranky_roentgen[330104]: 167 167
Jan 22 10:05:22 np0005592157 systemd[1]: libpod-002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64.scope: Deactivated successfully.
Jan 22 10:05:22 np0005592157 podman[330088]: 2026-01-22 15:05:22.24500773 +0000 UTC m=+0.175229987 container died 002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:05:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a5b82d571ce9f544c88800a26e9d90e0c9f434b7d96decc68ce10a222e3d38db-merged.mount: Deactivated successfully.
Jan 22 10:05:22 np0005592157 podman[330088]: 2026-01-22 15:05:22.289836117 +0000 UTC m=+0.220058374 container remove 002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 10:05:22 np0005592157 systemd[1]: libpod-conmon-002082aeaed47c8b0256fc4f6e478d76bb2bd089f16368c82c0f7adb4691eb64.scope: Deactivated successfully.
Jan 22 10:05:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:22 np0005592157 podman[330129]: 2026-01-22 15:05:22.486954413 +0000 UTC m=+0.054074376 container create d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:05:22 np0005592157 systemd[1]: Started libpod-conmon-d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b.scope.
Jan 22 10:05:22 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:22 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:05:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74495ff68b3824254772986c4ff3d68b994d5a0bdae4b4b2017233fe8b7adc4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74495ff68b3824254772986c4ff3d68b994d5a0bdae4b4b2017233fe8b7adc4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74495ff68b3824254772986c4ff3d68b994d5a0bdae4b4b2017233fe8b7adc4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:22 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74495ff68b3824254772986c4ff3d68b994d5a0bdae4b4b2017233fe8b7adc4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:22 np0005592157 podman[330129]: 2026-01-22 15:05:22.465392631 +0000 UTC m=+0.032512624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:22 np0005592157 podman[330129]: 2026-01-22 15:05:22.572559007 +0000 UTC m=+0.139679040 container init d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kapitsa, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:05:22 np0005592157 podman[330129]: 2026-01-22 15:05:22.580273697 +0000 UTC m=+0.147393650 container start d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:05:22 np0005592157 podman[330129]: 2026-01-22 15:05:22.587182308 +0000 UTC m=+0.154302351 container attach d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:05:22 np0005592157 podman[330143]: 2026-01-22 15:05:22.606831093 +0000 UTC m=+0.078085309 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:05:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:23.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]: {
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]:        "osd_id": 0,
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]:        "type": "bluestore"
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]:    }
Jan 22 10:05:23 np0005592157 goofy_kapitsa[330147]: }
Jan 22 10:05:23 np0005592157 systemd[1]: libpod-d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b.scope: Deactivated successfully.
Jan 22 10:05:23 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:23 np0005592157 podman[330184]: 2026-01-22 15:05:23.571339944 +0000 UTC m=+0.045154766 container died d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-74495ff68b3824254772986c4ff3d68b994d5a0bdae4b4b2017233fe8b7adc4e-merged.mount: Deactivated successfully.
Jan 22 10:05:23 np0005592157 podman[330184]: 2026-01-22 15:05:23.63113868 +0000 UTC m=+0.104953472 container remove d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:05:23 np0005592157 systemd[1]: libpod-conmon-d35b312f67611d10e602f1d272576b03517bb52acd44ed94d1ca2989cc0d741b.scope: Deactivated successfully.
Jan 22 10:05:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:05:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:05:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 20e5cc03-5982-4af0-b25a-d381476ae5ad does not exist
Jan 22 10:05:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 56c32e95-7e0e-46d5-8dd1-c2ec0a7d777a does not exist
Jan 22 10:05:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 54efea14-4588-4ae7-81a7-345b3843efb5 does not exist
Jan 22 10:05:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:24 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 5313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:25.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:25 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:25 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 5313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:26.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:26 np0005592157 ceph-mon[74359]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:27.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:27 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:28 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:29.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:29 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 5318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:30.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:30 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:30 np0005592157 ceph-mon[74359]: Health check update: 19 slow ops, oldest one blocked for 5318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:31.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:31 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:32.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:32 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:33.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:33 np0005592157 podman[330252]: 2026-01-22 15:05:33.383708041 +0000 UTC m=+0.122527146 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:05:34 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 102 slow ops, oldest one blocked for 5323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:35.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:35 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592157 ceph-mon[74359]: Health check update: 102 slow ops, oldest one blocked for 5323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:36.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:37 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:05:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:38 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:39 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:39 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 102 slow ops, oldest one blocked for 5328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:40 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:40 np0005592157 ceph-mon[74359]: Health check update: 102 slow ops, oldest one blocked for 5328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:42 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:44 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 102 slow ops, oldest one blocked for 5333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:45 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:45 np0005592157 ceph-mon[74359]: Health check update: 102 slow ops, oldest one blocked for 5333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:46.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:46 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:05:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:05:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:47.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:05:47
Jan 22 10:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.log', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images']
Jan 22 10:05:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:05:47 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:05:47.642 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:05:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:05:47.642 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:05:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:05:47.642 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:05:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:48.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:48 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:49.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:49 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 102 slow ops, oldest one blocked for 5338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:05:50 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:50 np0005592157 ceph-mon[74359]: Health check update: 102 slow ops, oldest one blocked for 5338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:51.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:51 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:52.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:05:53 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:53.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:53 np0005592157 podman[330338]: 2026-01-22 15:05:53.314139049 +0000 UTC m=+0.055910121 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 10:05:54 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:54.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:55 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 102 slow ops, oldest one blocked for 5343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:56 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:56 np0005592157 ceph-mon[74359]: Health check update: 102 slow ops, oldest one blocked for 5343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:56.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:57 np0005592157 ceph-mon[74359]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:58 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:05:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:05:59 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:05:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:05:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:05:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:06:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 102 slow ops, oldest one blocked for 5348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:00 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592157 ceph-mon[74359]: Health check update: 102 slow ops, oldest one blocked for 5348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:00.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:01 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:02.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:03 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:04 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:04 np0005592157 podman[330413]: 2026-01-22 15:06:04.356734057 +0000 UTC m=+0.089734756 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 22 10:06:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:04.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002791918213753955 of space, bias 1.0, pg target 0.8264077912711707 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:06:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:06:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 84 slow ops, oldest one blocked for 5353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:05 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:05 np0005592157 ceph-mon[74359]: Health check update: 84 slow ops, oldest one blocked for 5353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:06.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:06 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:07.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:07 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:08.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:08 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:09.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:09 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 84 slow ops, oldest one blocked for 5358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:10.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:10 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:10 np0005592157 ceph-mon[74359]: Health check update: 84 slow ops, oldest one blocked for 5358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:11.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:11 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:12.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:12 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:13.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:13 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:14.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:14 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 84 slow ops, oldest one blocked for 5363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:06:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:15.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:06:15 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:15 np0005592157 ceph-mon[74359]: Health check update: 84 slow ops, oldest one blocked for 5363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:06:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:16.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:06:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:06:16 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:17.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:17 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:18.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:18 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 84 slow ops, oldest one blocked for 5368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.185332) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380185400, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1341, "num_deletes": 250, "total_data_size": 1848722, "memory_usage": 1881208, "flush_reason": "Manual Compaction"}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380194220, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 1191330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82390, "largest_seqno": 83730, "table_properties": {"data_size": 1186443, "index_size": 2090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14800, "raw_average_key_size": 21, "raw_value_size": 1174975, "raw_average_value_size": 1727, "num_data_blocks": 89, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094290, "oldest_key_time": 1769094290, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 8940 microseconds, and 4028 cpu microseconds.
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.194282) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 1191330 bytes OK
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.194302) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.195620) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.195631) EVENT_LOG_v1 {"time_micros": 1769094380195627, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.195645) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1842685, prev total WAL file size 1842685, number of live WAL files 2.
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.196246) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(1163KB)], [188(11MB)]
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380196328, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 13178705, "oldest_snapshot_seqno": -1}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 13922 keys, 9876499 bytes, temperature: kUnknown
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380273394, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 9876499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9804317, "index_size": 36293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34821, "raw_key_size": 383383, "raw_average_key_size": 27, "raw_value_size": 9571024, "raw_average_value_size": 687, "num_data_blocks": 1296, "num_entries": 13922, "num_filter_entries": 13922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.273621) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9876499 bytes
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.275298) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.9 rd, 128.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(19.4) write-amplify(8.3) OK, records in: 14400, records dropped: 478 output_compression: NoCompression
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.275315) EVENT_LOG_v1 {"time_micros": 1769094380275307, "job": 118, "event": "compaction_finished", "compaction_time_micros": 77125, "compaction_time_cpu_micros": 26813, "output_level": 6, "num_output_files": 1, "total_output_size": 9876499, "num_input_records": 14400, "num_output_records": 13922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380275571, "job": 118, "event": "table_file_deletion", "file_number": 190}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380277399, "job": 118, "event": "table_file_deletion", "file_number": 188}
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.196164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.277502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.277511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.277512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.277514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:06:20.277516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:20.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:21 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:21 np0005592157 ceph-mon[74359]: Health check update: 84 slow ops, oldest one blocked for 5368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:21.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:22 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:22.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:23.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:24 np0005592157 podman[330503]: 2026-01-22 15:06:24.321207416 +0000 UTC m=+0.050516608 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:06:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:24.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:24 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 84 slow ops, oldest one blocked for 5373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:25.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: Health check update: 84 slow ops, oldest one blocked for 5373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 399c7a66-7549-4753-8df0-1e460529b4b4 does not exist
Jan 22 10:06:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b5aada9c-63dd-4338-b58a-4f4a44e21037 does not exist
Jan 22 10:06:26 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 885ff9ae-5df3-48d8-bee7-c90dc57b6575 does not exist
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:06:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:06:26 np0005592157 podman[330788]: 2026-01-22 15:06:26.774417711 +0000 UTC m=+0.038818139 container create 39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_maxwell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:06:26 np0005592157 systemd[1]: Started libpod-conmon-39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0.scope.
Jan 22 10:06:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:06:26 np0005592157 podman[330788]: 2026-01-22 15:06:26.849060464 +0000 UTC m=+0.113460912 container init 39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 22 10:06:26 np0005592157 podman[330788]: 2026-01-22 15:06:26.756433977 +0000 UTC m=+0.020834425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:06:26 np0005592157 podman[330788]: 2026-01-22 15:06:26.855173545 +0000 UTC m=+0.119573973 container start 39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 10:06:26 np0005592157 podman[330788]: 2026-01-22 15:06:26.858855666 +0000 UTC m=+0.123256094 container attach 39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_maxwell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:06:26 np0005592157 affectionate_maxwell[330804]: 167 167
Jan 22 10:06:26 np0005592157 systemd[1]: libpod-39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0.scope: Deactivated successfully.
Jan 22 10:06:26 np0005592157 podman[330788]: 2026-01-22 15:06:26.861633594 +0000 UTC m=+0.126034032 container died 39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_maxwell, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:06:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8af0e9b54529c9e7d0fcd97d95ef24db279b762465e35166317a37c09bc08a19-merged.mount: Deactivated successfully.
Jan 22 10:06:26 np0005592157 podman[330788]: 2026-01-22 15:06:26.898809272 +0000 UTC m=+0.163209700 container remove 39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_maxwell, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 10:06:26 np0005592157 systemd[1]: libpod-conmon-39eb0787e43713a5112bc88dd32ee88a7f125f9f22e93554596aff648fb625e0.scope: Deactivated successfully.
Jan 22 10:06:27 np0005592157 podman[330828]: 2026-01-22 15:06:27.061064108 +0000 UTC m=+0.038609624 container create fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:06:27 np0005592157 systemd[1]: Started libpod-conmon-fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08.scope.
Jan 22 10:06:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:06:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72c9cddcc0f0b9ad4b1f74ab3ee4cb6c797675a12483b3a7d60a961059703bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72c9cddcc0f0b9ad4b1f74ab3ee4cb6c797675a12483b3a7d60a961059703bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72c9cddcc0f0b9ad4b1f74ab3ee4cb6c797675a12483b3a7d60a961059703bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72c9cddcc0f0b9ad4b1f74ab3ee4cb6c797675a12483b3a7d60a961059703bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72c9cddcc0f0b9ad4b1f74ab3ee4cb6c797675a12483b3a7d60a961059703bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:27 np0005592157 podman[330828]: 2026-01-22 15:06:27.124371871 +0000 UTC m=+0.101917417 container init fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:06:27 np0005592157 podman[330828]: 2026-01-22 15:06:27.133418064 +0000 UTC m=+0.110963580 container start fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 10:06:27 np0005592157 podman[330828]: 2026-01-22 15:06:27.137342381 +0000 UTC m=+0.114887917 container attach fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:06:27 np0005592157 podman[330828]: 2026-01-22 15:06:27.044827767 +0000 UTC m=+0.022373303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:06:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:27.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:27 np0005592157 compassionate_morse[330844]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:06:27 np0005592157 compassionate_morse[330844]: --> relative data size: 1.0
Jan 22 10:06:27 np0005592157 compassionate_morse[330844]: --> All data devices are unavailable
Jan 22 10:06:27 np0005592157 systemd[1]: libpod-fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08.scope: Deactivated successfully.
Jan 22 10:06:27 np0005592157 podman[330828]: 2026-01-22 15:06:27.931735212 +0000 UTC m=+0.909280728 container died fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:06:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b72c9cddcc0f0b9ad4b1f74ab3ee4cb6c797675a12483b3a7d60a961059703bb-merged.mount: Deactivated successfully.
Jan 22 10:06:27 np0005592157 podman[330828]: 2026-01-22 15:06:27.9835088 +0000 UTC m=+0.961054316 container remove fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_morse, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:06:27 np0005592157 systemd[1]: libpod-conmon-fd1652d999f7adbc660479bb07bb3f289570d63fd2b967aa81532d2d2a1dee08.scope: Deactivated successfully.
Jan 22 10:06:28 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:06:28.233 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:06:28 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:06:28.236 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:06:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:28 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:28 np0005592157 podman[331015]: 2026-01-22 15:06:28.681477732 +0000 UTC m=+0.070585774 container create b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 10:06:28 np0005592157 systemd[1]: Started libpod-conmon-b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421.scope.
Jan 22 10:06:28 np0005592157 podman[331015]: 2026-01-22 15:06:28.652787663 +0000 UTC m=+0.041895745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:06:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:06:28 np0005592157 podman[331015]: 2026-01-22 15:06:28.784079085 +0000 UTC m=+0.173187087 container init b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:06:28 np0005592157 podman[331015]: 2026-01-22 15:06:28.794712727 +0000 UTC m=+0.183820759 container start b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:06:28 np0005592157 podman[331015]: 2026-01-22 15:06:28.799087805 +0000 UTC m=+0.188195827 container attach b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:06:28 np0005592157 nice_bhaskara[331031]: 167 167
Jan 22 10:06:28 np0005592157 systemd[1]: libpod-b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421.scope: Deactivated successfully.
Jan 22 10:06:28 np0005592157 podman[331015]: 2026-01-22 15:06:28.802641603 +0000 UTC m=+0.191749645 container died b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:06:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-96a311f048e9dedab62831c769683fc1c4217e362a1f60dba43b5bbae4919634-merged.mount: Deactivated successfully.
Jan 22 10:06:28 np0005592157 podman[331015]: 2026-01-22 15:06:28.849646994 +0000 UTC m=+0.238755006 container remove b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:06:28 np0005592157 systemd[1]: libpod-conmon-b678b8c55c174d8ea6c63fd08b2e7f0eded1428ec9a404de00d3041510046421.scope: Deactivated successfully.
Jan 22 10:06:29 np0005592157 podman[331054]: 2026-01-22 15:06:29.021594529 +0000 UTC m=+0.047715339 container create ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 10:06:29 np0005592157 systemd[1]: Started libpod-conmon-ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8.scope.
Jan 22 10:06:29 np0005592157 podman[331054]: 2026-01-22 15:06:28.999257207 +0000 UTC m=+0.025378087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:06:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:06:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f404a8e5a68f98a7f22793ad32c8a58bedae6038419e956475c0c48927514d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f404a8e5a68f98a7f22793ad32c8a58bedae6038419e956475c0c48927514d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f404a8e5a68f98a7f22793ad32c8a58bedae6038419e956475c0c48927514d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f404a8e5a68f98a7f22793ad32c8a58bedae6038419e956475c0c48927514d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:29 np0005592157 podman[331054]: 2026-01-22 15:06:29.127257037 +0000 UTC m=+0.153377907 container init ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:06:29 np0005592157 podman[331054]: 2026-01-22 15:06:29.139162541 +0000 UTC m=+0.165283371 container start ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 10:06:29 np0005592157 podman[331054]: 2026-01-22 15:06:29.143417506 +0000 UTC m=+0.169538336 container attach ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:06:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:29.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:29 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]: {
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:    "0": [
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:        {
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "devices": [
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "/dev/loop3"
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            ],
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "lv_name": "ceph_lv0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "lv_size": "7511998464",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "name": "ceph_lv0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "tags": {
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.cluster_name": "ceph",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.crush_device_class": "",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.encrypted": "0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.osd_id": "0",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.type": "block",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:                "ceph.vdo": "0"
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            },
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "type": "block",
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:            "vg_name": "ceph_vg0"
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:        }
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]:    ]
Jan 22 10:06:29 np0005592157 recursing_mendeleev[331071]: }
Jan 22 10:06:29 np0005592157 systemd[1]: libpod-ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8.scope: Deactivated successfully.
Jan 22 10:06:29 np0005592157 conmon[331071]: conmon ddbee8d447c983960cf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8.scope/container/memory.events
Jan 22 10:06:30 np0005592157 podman[331082]: 2026-01-22 15:06:30.02132329 +0000 UTC m=+0.027604223 container died ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:06:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-19f404a8e5a68f98a7f22793ad32c8a58bedae6038419e956475c0c48927514d-merged.mount: Deactivated successfully.
Jan 22 10:06:30 np0005592157 podman[331082]: 2026-01-22 15:06:30.077559178 +0000 UTC m=+0.083840101 container remove ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:06:30 np0005592157 systemd[1]: libpod-conmon-ddbee8d447c983960cf5e33717485110a60b59ef8fa9c1f2fb721ce22ddc7fe8.scope: Deactivated successfully.
Jan 22 10:06:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 84 slow ops, oldest one blocked for 5378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:30 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:30 np0005592157 ceph-mon[74359]: Health check update: 84 slow ops, oldest one blocked for 5378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:30 np0005592157 podman[331236]: 2026-01-22 15:06:30.815772543 +0000 UTC m=+0.051942413 container create 7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:06:30 np0005592157 systemd[1]: Started libpod-conmon-7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110.scope.
Jan 22 10:06:30 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:06:30 np0005592157 podman[331236]: 2026-01-22 15:06:30.792484488 +0000 UTC m=+0.028654378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:06:30 np0005592157 podman[331236]: 2026-01-22 15:06:30.892462925 +0000 UTC m=+0.128632785 container init 7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:06:30 np0005592157 podman[331236]: 2026-01-22 15:06:30.898876734 +0000 UTC m=+0.135046564 container start 7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 10:06:30 np0005592157 podman[331236]: 2026-01-22 15:06:30.901735314 +0000 UTC m=+0.137905194 container attach 7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:06:30 np0005592157 modest_bohr[331252]: 167 167
Jan 22 10:06:30 np0005592157 systemd[1]: libpod-7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110.scope: Deactivated successfully.
Jan 22 10:06:30 np0005592157 podman[331236]: 2026-01-22 15:06:30.904262967 +0000 UTC m=+0.140432807 container died 7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:06:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d352a455da8ea3166b2d0f4e8660a3725bd776f02f621c8ea3096684768ffb1f-merged.mount: Deactivated successfully.
Jan 22 10:06:30 np0005592157 podman[331236]: 2026-01-22 15:06:30.93964947 +0000 UTC m=+0.175819310 container remove 7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 10:06:30 np0005592157 systemd[1]: libpod-conmon-7108cf94cc21eb98ee37072eff3e34c0a049983a21be4cc22f3964cee1bd2110.scope: Deactivated successfully.
Jan 22 10:06:31 np0005592157 podman[331277]: 2026-01-22 15:06:31.088428653 +0000 UTC m=+0.021308807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:06:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:31.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:31 np0005592157 podman[331277]: 2026-01-22 15:06:31.569802878 +0000 UTC m=+0.502683042 container create e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:06:31 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:31 np0005592157 systemd[1]: Started libpod-conmon-e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182.scope.
Jan 22 10:06:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:06:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6420481d368057039b44f97179f5899799bc73664b1d9d16efcc18fd1d5debb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6420481d368057039b44f97179f5899799bc73664b1d9d16efcc18fd1d5debb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6420481d368057039b44f97179f5899799bc73664b1d9d16efcc18fd1d5debb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6420481d368057039b44f97179f5899799bc73664b1d9d16efcc18fd1d5debb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:06:31 np0005592157 podman[331277]: 2026-01-22 15:06:31.70277007 +0000 UTC m=+0.635650314 container init e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:06:31 np0005592157 podman[331277]: 2026-01-22 15:06:31.711131787 +0000 UTC m=+0.644011931 container start e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 10:06:31 np0005592157 podman[331277]: 2026-01-22 15:06:31.714615523 +0000 UTC m=+0.647495667 container attach e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:06:32 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:06:32.238 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:06:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:32 np0005592157 serene_edison[331293]: {
Jan 22 10:06:32 np0005592157 serene_edison[331293]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:06:32 np0005592157 serene_edison[331293]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:06:32 np0005592157 serene_edison[331293]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:06:32 np0005592157 serene_edison[331293]:        "osd_id": 0,
Jan 22 10:06:32 np0005592157 serene_edison[331293]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:06:32 np0005592157 serene_edison[331293]:        "type": "bluestore"
Jan 22 10:06:32 np0005592157 serene_edison[331293]:    }
Jan 22 10:06:32 np0005592157 serene_edison[331293]: }
Jan 22 10:06:32 np0005592157 systemd[1]: libpod-e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182.scope: Deactivated successfully.
Jan 22 10:06:32 np0005592157 podman[331315]: 2026-01-22 15:06:32.599316094 +0000 UTC m=+0.040837999 container died e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:06:32 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6420481d368057039b44f97179f5899799bc73664b1d9d16efcc18fd1d5debb5-merged.mount: Deactivated successfully.
Jan 22 10:06:32 np0005592157 podman[331315]: 2026-01-22 15:06:32.747538873 +0000 UTC m=+0.189060698 container remove e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:06:32 np0005592157 systemd[1]: libpod-conmon-e0599634f447ad94959058f124f4b50e80dbb02d77ff3b84dcd558e6d66b3182.scope: Deactivated successfully.
Jan 22 10:06:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:06:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:06:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b5d59341-b37f-4153-8ef7-12430404aa45 does not exist
Jan 22 10:06:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9e766411-79e4-4e5b-a1a4-64815f8b0a4f does not exist
Jan 22 10:06:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3269274a-e7bd-4dcc-9e37-8b55c79aefab does not exist
Jan 22 10:06:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:33.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:33 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:34 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 103 slow ops, oldest one blocked for 5383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:35.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:35 np0005592157 podman[331379]: 2026-01-22 15:06:35.384537636 +0000 UTC m=+0.114405386 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:06:35 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:35 np0005592157 ceph-mon[74359]: Health check update: 103 slow ops, oldest one blocked for 5383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:37 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:37.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:38 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:38.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:39 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:39.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:40 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 103 slow ops, oldest one blocked for 5388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:40.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:41 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592157 ceph-mon[74359]: Health check update: 103 slow ops, oldest one blocked for 5388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:41 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:41.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:42 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:43 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:43.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:44 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 103 slow ops, oldest one blocked for 5393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:45.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:45 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:45 np0005592157 ceph-mon[74359]: Health check update: 103 slow ops, oldest one blocked for 5393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:46.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:06:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:06:46 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:47.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:06:47
Jan 22 10:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes']
Jan 22 10:06:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:06:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:06:47.642 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:06:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:06:47.643 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:06:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:06:47.643 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:06:47 np0005592157 ceph-mon[74359]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:06:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:48.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:06:48 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:49 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 103 slow ops, oldest one blocked for 5397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:50.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:50 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:50 np0005592157 ceph-mon[74359]: Health check update: 103 slow ops, oldest one blocked for 5397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:51.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:52 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:52.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:53 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:06:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:53.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:06:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:54.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:54 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:54 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 5402 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:55.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:55 np0005592157 podman[331465]: 2026-01-22 15:06:55.315902388 +0000 UTC m=+0.049707628 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:06:55 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:55 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 5402 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:56.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:56 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:57 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:58 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:06:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:06:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:59.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:59 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 5407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:00 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:00 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 5407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 10:07:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:01.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:01 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:02.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:02 np0005592157 ceph-mon[74359]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:07:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 10:07:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:03 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:07:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:04.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:07:04 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002791918213753955 of space, bias 1.0, pg target 0.8264077912711707 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:07:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:07:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 5412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 10:07:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:05.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:06 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:06 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 5412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:06 np0005592157 podman[331542]: 2026-01-22 15:07:06.386818296 +0000 UTC m=+0.114767365 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:07:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:07 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:07 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 10:07:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:07.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:08 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:09 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 10:07:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:09.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:10 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 5417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:10.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:11 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 5417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:11 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 10:07:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:12 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:12.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:13 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Jan 22 10:07:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:13.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:14 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000074s ======
Jan 22 10:07:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:14.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Jan 22 10:07:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 5422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Jan 22 10:07:15 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:15 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 5422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:15.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:16 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:07:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:07:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:17 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:17.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:18 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:19.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:19 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 5427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:20.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:20 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:20 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 5427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:21.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:21 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:22.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:23.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:23 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:24.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:24 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:24 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 81 slow ops, oldest one blocked for 5432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:07:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:25.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:07:25 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:25 np0005592157 ceph-mon[74359]: Health check update: 81 slow ops, oldest one blocked for 5432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:26 np0005592157 podman[331629]: 2026-01-22 15:07:26.314263511 +0000 UTC m=+0.056758452 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 10:07:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:26.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:26 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:27.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:27 np0005592157 ceph-mon[74359]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:28.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:28 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:29.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 81 slow ops, oldest one blocked for 5437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:30 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:30.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:31.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:31 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592157 ceph-mon[74359]: Health check update: 81 slow ops, oldest one blocked for 5437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:32.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 22 10:07:32 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 22 10:07:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 22 10:07:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:07:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:33.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.942428) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453942532, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1169, "num_deletes": 306, "total_data_size": 1464976, "memory_usage": 1489216, "flush_reason": "Manual Compaction"}
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453953681, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1441669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83731, "largest_seqno": 84899, "table_properties": {"data_size": 1436443, "index_size": 2429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14759, "raw_average_key_size": 21, "raw_value_size": 1424591, "raw_average_value_size": 2076, "num_data_blocks": 104, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 306, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094381, "oldest_key_time": 1769094381, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 11293 microseconds, and 4413 cpu microseconds.
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.953732) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1441669 bytes OK
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.953750) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956604) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956671) EVENT_LOG_v1 {"time_micros": 1769094453956659, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956702) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1459319, prev total WAL file size 1459319, number of live WAL files 2.
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957593) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1407KB)], [191(9645KB)]
Jan 22 10:07:33 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453957651, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 11318168, "oldest_snapshot_seqno": -1}
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 13977 keys, 9685277 bytes, temperature: kUnknown
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454025565, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 9685277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9612762, "index_size": 36498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34949, "raw_key_size": 385274, "raw_average_key_size": 27, "raw_value_size": 9378597, "raw_average_value_size": 671, "num_data_blocks": 1301, "num_entries": 13977, "num_filter_entries": 13977, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025840) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9685277 bytes
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.027384) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.5 rd, 142.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(14.6) write-amplify(6.7) OK, records in: 14608, records dropped: 631 output_compression: NoCompression
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.027400) EVENT_LOG_v1 {"time_micros": 1769094454027392, "job": 120, "event": "compaction_finished", "compaction_time_micros": 67989, "compaction_time_cpu_micros": 27762, "output_level": 6, "num_output_files": 1, "total_output_size": 9685277, "num_input_records": 14608, "num_output_records": 13977, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454027681, "job": 120, "event": "table_file_deletion", "file_number": 193}
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454029179, "job": 120, "event": "table_file_deletion", "file_number": 191}
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.029206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.029210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.029211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.029213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:07:34.029214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:07:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 10:07:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 5442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bd6843aa-d00a-4a03-b7a4-6c15f82caef0 does not exist
Jan 22 10:07:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fe64f0f4-168f-40c6-bef2-f8724ea52f38 does not exist
Jan 22 10:07:35 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 70115124-9c0f-4a24-9090-03aaad67e471 does not exist
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:07:35 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:07:36 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:07:36 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:36 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 5442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:07:36 np0005592157 podman[332047]: 2026-01-22 15:07:36.193119009 +0000 UTC m=+0.061757475 container create bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jang, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:07:36 np0005592157 systemd[1]: Started libpod-conmon-bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1.scope.
Jan 22 10:07:36 np0005592157 podman[332047]: 2026-01-22 15:07:36.171849894 +0000 UTC m=+0.040488400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:07:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:07:36 np0005592157 podman[332047]: 2026-01-22 15:07:36.281596663 +0000 UTC m=+0.150235149 container init bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 10:07:36 np0005592157 podman[332047]: 2026-01-22 15:07:36.288554945 +0000 UTC m=+0.157193391 container start bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 10:07:36 np0005592157 podman[332047]: 2026-01-22 15:07:36.292877462 +0000 UTC m=+0.161515938 container attach bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jang, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 10:07:36 np0005592157 modest_jang[332064]: 167 167
Jan 22 10:07:36 np0005592157 systemd[1]: libpod-bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1.scope: Deactivated successfully.
Jan 22 10:07:36 np0005592157 podman[332047]: 2026-01-22 15:07:36.294752278 +0000 UTC m=+0.163390734 container died bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jang, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:07:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-562705254df649e03a62c076ff48a3f93ba056ed1682b74947b09f92cf2d5ddb-merged.mount: Deactivated successfully.
Jan 22 10:07:36 np0005592157 podman[332047]: 2026-01-22 15:07:36.341343058 +0000 UTC m=+0.209981504 container remove bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:07:36 np0005592157 systemd[1]: libpod-conmon-bea50082581e84b8c7533f18d2267bf4a69a6d7e7a10feee2039d45c5c00d9b1.scope: Deactivated successfully.
Jan 22 10:07:36 np0005592157 podman[332088]: 2026-01-22 15:07:36.496592711 +0000 UTC m=+0.038257395 container create cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 10:07:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:36 np0005592157 systemd[1]: Started libpod-conmon-cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda.scope.
Jan 22 10:07:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:07:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7ff788854c7577bda1d3155f06f2038f719aeb0af0f7379681f01da3990140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7ff788854c7577bda1d3155f06f2038f719aeb0af0f7379681f01da3990140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7ff788854c7577bda1d3155f06f2038f719aeb0af0f7379681f01da3990140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7ff788854c7577bda1d3155f06f2038f719aeb0af0f7379681f01da3990140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7ff788854c7577bda1d3155f06f2038f719aeb0af0f7379681f01da3990140/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:36 np0005592157 podman[332088]: 2026-01-22 15:07:36.479879609 +0000 UTC m=+0.021544313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:07:36 np0005592157 podman[332088]: 2026-01-22 15:07:36.580958714 +0000 UTC m=+0.122623418 container init cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:07:36 np0005592157 podman[332088]: 2026-01-22 15:07:36.589038204 +0000 UTC m=+0.130702878 container start cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:07:36 np0005592157 podman[332088]: 2026-01-22 15:07:36.594414796 +0000 UTC m=+0.136079510 container attach cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:07:36 np0005592157 podman[332102]: 2026-01-22 15:07:36.631720497 +0000 UTC m=+0.098070222 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 10:07:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 10:07:37 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:37.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:37 np0005592157 determined_nightingale[332110]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:07:37 np0005592157 determined_nightingale[332110]: --> relative data size: 1.0
Jan 22 10:07:37 np0005592157 determined_nightingale[332110]: --> All data devices are unavailable
Jan 22 10:07:37 np0005592157 systemd[1]: libpod-cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda.scope: Deactivated successfully.
Jan 22 10:07:37 np0005592157 podman[332088]: 2026-01-22 15:07:37.448054061 +0000 UTC m=+0.989718745 container died cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:07:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7c7ff788854c7577bda1d3155f06f2038f719aeb0af0f7379681f01da3990140-merged.mount: Deactivated successfully.
Jan 22 10:07:37 np0005592157 podman[332088]: 2026-01-22 15:07:37.704345808 +0000 UTC m=+1.246010492 container remove cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:07:37 np0005592157 systemd[1]: libpod-conmon-cf0c8d7c63fd50266991d28ea08daf133148a0ef2a3cd5f78fd2bfabc005dbda.scope: Deactivated successfully.
Jan 22 10:07:38 np0005592157 podman[332297]: 2026-01-22 15:07:38.296438216 +0000 UTC m=+0.041949557 container create 89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:07:38 np0005592157 systemd[1]: Started libpod-conmon-89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3.scope.
Jan 22 10:07:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:07:38 np0005592157 podman[332297]: 2026-01-22 15:07:38.280838921 +0000 UTC m=+0.026350282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:07:38 np0005592157 podman[332297]: 2026-01-22 15:07:38.38167798 +0000 UTC m=+0.127189341 container init 89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:07:38 np0005592157 podman[332297]: 2026-01-22 15:07:38.390278593 +0000 UTC m=+0.135789934 container start 89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:07:38 np0005592157 podman[332297]: 2026-01-22 15:07:38.393479612 +0000 UTC m=+0.138990963 container attach 89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:07:38 np0005592157 adoring_sinoussi[332313]: 167 167
Jan 22 10:07:38 np0005592157 systemd[1]: libpod-89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3.scope: Deactivated successfully.
Jan 22 10:07:38 np0005592157 podman[332297]: 2026-01-22 15:07:38.39502477 +0000 UTC m=+0.140536111 container died 89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:07:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6da61a6d7808544ca2e47b59450feeef056f8c361059e49cb20e855fa116f096-merged.mount: Deactivated successfully.
Jan 22 10:07:38 np0005592157 podman[332297]: 2026-01-22 15:07:38.430938667 +0000 UTC m=+0.176450008 container remove 89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 10:07:38 np0005592157 systemd[1]: libpod-conmon-89c74b6b1022126cea97a4c2bc0e0be3710d0c0b3a1a683f226e3a6dd9cae5f3.scope: Deactivated successfully.
Jan 22 10:07:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:38.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:38 np0005592157 podman[332336]: 2026-01-22 15:07:38.617275407 +0000 UTC m=+0.040186693 container create 6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:07:38 np0005592157 systemd[1]: Started libpod-conmon-6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3.scope.
Jan 22 10:07:38 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:07:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68282d05feb88fb18ef483868aa7ce9caa30108fb78998408c9cba05066b08f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68282d05feb88fb18ef483868aa7ce9caa30108fb78998408c9cba05066b08f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68282d05feb88fb18ef483868aa7ce9caa30108fb78998408c9cba05066b08f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68282d05feb88fb18ef483868aa7ce9caa30108fb78998408c9cba05066b08f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:38 np0005592157 podman[332336]: 2026-01-22 15:07:38.599495948 +0000 UTC m=+0.022407264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:07:38 np0005592157 podman[332336]: 2026-01-22 15:07:38.69761998 +0000 UTC m=+0.120531346 container init 6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:07:38 np0005592157 podman[332336]: 2026-01-22 15:07:38.702680435 +0000 UTC m=+0.125591741 container start 6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:07:38 np0005592157 podman[332336]: 2026-01-22 15:07:38.706573741 +0000 UTC m=+0.129485107 container attach 6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 10:07:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 10:07:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:39.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:39 np0005592157 practical_moore[332352]: {
Jan 22 10:07:39 np0005592157 practical_moore[332352]:    "0": [
Jan 22 10:07:39 np0005592157 practical_moore[332352]:        {
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "devices": [
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "/dev/loop3"
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            ],
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "lv_name": "ceph_lv0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "lv_size": "7511998464",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "name": "ceph_lv0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "tags": {
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.cluster_name": "ceph",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.crush_device_class": "",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.encrypted": "0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.osd_id": "0",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.type": "block",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:                "ceph.vdo": "0"
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            },
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "type": "block",
Jan 22 10:07:39 np0005592157 practical_moore[332352]:            "vg_name": "ceph_vg0"
Jan 22 10:07:39 np0005592157 practical_moore[332352]:        }
Jan 22 10:07:39 np0005592157 practical_moore[332352]:    ]
Jan 22 10:07:39 np0005592157 practical_moore[332352]: }
Jan 22 10:07:39 np0005592157 systemd[1]: libpod-6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3.scope: Deactivated successfully.
Jan 22 10:07:39 np0005592157 podman[332361]: 2026-01-22 15:07:39.496225345 +0000 UTC m=+0.027793147 container died 6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:07:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-68282d05feb88fb18ef483868aa7ce9caa30108fb78998408c9cba05066b08f9-merged.mount: Deactivated successfully.
Jan 22 10:07:39 np0005592157 podman[332361]: 2026-01-22 15:07:39.552581867 +0000 UTC m=+0.084149639 container remove 6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 10:07:39 np0005592157 systemd[1]: libpod-conmon-6373c6c31db52c13fe9bf3d58cdf8d83b001ddea8d5fe7d78785addde1a988d3.scope: Deactivated successfully.
Jan 22 10:07:39 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:40 np0005592157 podman[332517]: 2026-01-22 15:07:40.166993555 +0000 UTC m=+0.040440799 container create 05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:07:40 np0005592157 systemd[1]: Started libpod-conmon-05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5.scope.
Jan 22 10:07:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:07:40 np0005592157 podman[332517]: 2026-01-22 15:07:40.151585915 +0000 UTC m=+0.025033179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:07:40 np0005592157 podman[332517]: 2026-01-22 15:07:40.249070562 +0000 UTC m=+0.122517826 container init 05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:07:40 np0005592157 podman[332517]: 2026-01-22 15:07:40.260124195 +0000 UTC m=+0.133571439 container start 05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 10:07:40 np0005592157 podman[332517]: 2026-01-22 15:07:40.264103413 +0000 UTC m=+0.137550657 container attach 05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 10:07:40 np0005592157 youthful_wu[332533]: 167 167
Jan 22 10:07:40 np0005592157 systemd[1]: libpod-05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5.scope: Deactivated successfully.
Jan 22 10:07:40 np0005592157 podman[332517]: 2026-01-22 15:07:40.2729191 +0000 UTC m=+0.146366384 container died 05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:07:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-69ce85a082f38965b7dbc5039d8e6a9e6cb559d46cd63278be2878f979b06167-merged.mount: Deactivated successfully.
Jan 22 10:07:40 np0005592157 podman[332517]: 2026-01-22 15:07:40.327662992 +0000 UTC m=+0.201110246 container remove 05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:07:40 np0005592157 systemd[1]: libpod-conmon-05dc792043537456dcf0ebf25cb02260cc417bc74cc1523cf05e2cd1552b04b5.scope: Deactivated successfully.
Jan 22 10:07:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 5447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:40 np0005592157 podman[332582]: 2026-01-22 15:07:40.54226595 +0000 UTC m=+0.043147046 container create 4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:07:40 np0005592157 systemd[1]: Started libpod-conmon-4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa.scope.
Jan 22 10:07:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:07:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6750fc38e9e1477e0612413326f75638c2e129fc069ce07667afcaaaf110e80d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6750fc38e9e1477e0612413326f75638c2e129fc069ce07667afcaaaf110e80d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6750fc38e9e1477e0612413326f75638c2e129fc069ce07667afcaaaf110e80d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6750fc38e9e1477e0612413326f75638c2e129fc069ce07667afcaaaf110e80d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:07:40 np0005592157 podman[332582]: 2026-01-22 15:07:40.524268916 +0000 UTC m=+0.025150032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:07:40 np0005592157 podman[332582]: 2026-01-22 15:07:40.631867732 +0000 UTC m=+0.132748828 container init 4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:07:40 np0005592157 podman[332582]: 2026-01-22 15:07:40.6398893 +0000 UTC m=+0.140770396 container start 4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:07:40 np0005592157 podman[332582]: 2026-01-22 15:07:40.643164631 +0000 UTC m=+0.144045727 container attach 4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yonath, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:07:40 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:40 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 5447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 10:07:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:41.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]: {
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]:        "osd_id": 0,
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]:        "type": "bluestore"
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]:    }
Jan 22 10:07:41 np0005592157 intelligent_yonath[332622]: }
Jan 22 10:07:41 np0005592157 systemd[1]: libpod-4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa.scope: Deactivated successfully.
Jan 22 10:07:41 np0005592157 podman[332643]: 2026-01-22 15:07:41.539443779 +0000 UTC m=+0.021406280 container died 4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:07:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6750fc38e9e1477e0612413326f75638c2e129fc069ce07667afcaaaf110e80d-merged.mount: Deactivated successfully.
Jan 22 10:07:41 np0005592157 podman[332643]: 2026-01-22 15:07:41.592396356 +0000 UTC m=+0.074358877 container remove 4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:07:41 np0005592157 systemd[1]: libpod-conmon-4b272739d97fecc35e97e907127e0d2551a8bd93e998cb8bb52a7e8ef922ebfa.scope: Deactivated successfully.
Jan 22 10:07:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:07:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:07:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 70b23294-bc26-4a7f-865e-ab7e22a53701 does not exist
Jan 22 10:07:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 88a9352d-5522-454a-bf3f-a1bfe98dbe61 does not exist
Jan 22 10:07:41 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9dbf87b7-ed4b-42ab-b35c-d4ee41c98b53 does not exist
Jan 22 10:07:41 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:42.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.9 MiB/s wr, 17 op/s
Jan 22 10:07:43 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:43.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:44 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:44 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:44.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 22 10:07:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 22 10:07:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 22 10:07:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 22 10:07:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:07:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:45.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:07:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 5452 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:46 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:46 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 5452 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:07:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:46.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:07:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:07:47 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 22 10:07:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:47.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:07:47
Jan 22 10:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'images', 'vms', 'cephfs.cephfs.meta']
Jan 22 10:07:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:07:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:07:47.643 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:07:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:07:47.644 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:07:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:07:47.644 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:07:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 22 10:07:48 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:48 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 22 10:07:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 22 10:07:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:48.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 860 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 MiB/s wr, 32 op/s
Jan 22 10:07:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:49 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 5457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 22 10:07:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:50 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 22 10:07:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 22 10:07:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.1 MiB/s wr, 66 op/s
Jan 22 10:07:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:52 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:52 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 5457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:52.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:53 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 2.6 MiB/s wr, 30 op/s
Jan 22 10:07:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:53.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:54 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 22 10:07:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 41 op/s
Jan 22 10:07:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 5462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 22 10:07:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 22 10:07:55 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:56.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:57 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:57 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 5462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.0 MiB/s wr, 32 op/s
Jan 22 10:07:57 np0005592157 podman[332716]: 2026-01-22 15:07:57.314056879 +0000 UTC m=+0.052456596 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:07:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:57.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:07:57.625 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:07:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:07:57.627 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:07:58 np0005592157 ceph-mon[74359]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:58 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:07:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:07:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:58.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:07:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 855 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 718 B/s wr, 11 op/s
Jan 22 10:07:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:07:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:59 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 5467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 10:08:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:01.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:01 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:01 np0005592157 ceph-mon[74359]: Health check update: 36 slow ops, oldest one blocked for 5467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:02.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:02 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:08:02.629 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:08:03 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 10:08:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:03.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:04 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:04.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002791918213753955 of space, bias 1.0, pg target 0.8264077912711707 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:08:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:08:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 22 10:08:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:05.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 22 10:08:05 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:05 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 22 10:08:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 22 10:08:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:07 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:07 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 22 10:08:07 np0005592157 podman[332791]: 2026-01-22 15:08:07.358781552 +0000 UTC m=+0.092545496 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 10:08:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:07.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:08 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:08.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Jan 22 10:08:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:09.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:09 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:09 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:10.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.618289) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490618785, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 790, "num_deletes": 329, "total_data_size": 845856, "memory_usage": 861624, "flush_reason": "Manual Compaction"}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490626575, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 822633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84900, "largest_seqno": 85689, "table_properties": {"data_size": 818649, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11369, "raw_average_key_size": 21, "raw_value_size": 809707, "raw_average_value_size": 1502, "num_data_blocks": 65, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 329, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094454, "oldest_key_time": 1769094454, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 8350 microseconds, and 2945 cpu microseconds.
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.626645) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 822633 bytes OK
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.626670) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629323) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629348) EVENT_LOG_v1 {"time_micros": 1769094490629341, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 841437, prev total WAL file size 841437, number of live WAL files 2.
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.630142) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323734' seq:72057594037927935, type:22 .. '6C6F676D0034353331' seq:0, type:0; will stop at (end)
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(803KB)], [194(9458KB)]
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490630183, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 10507910, "oldest_snapshot_seqno": -1}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 13843 keys, 10339521 bytes, temperature: kUnknown
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490730705, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10339521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10266696, "index_size": 37128, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34629, "raw_key_size": 383241, "raw_average_key_size": 27, "raw_value_size": 10033505, "raw_average_value_size": 724, "num_data_blocks": 1325, "num_entries": 13843, "num_filter_entries": 13843, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.731208) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10339521 bytes
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.733457) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.4 rd, 102.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(25.3) write-amplify(12.6) OK, records in: 14516, records dropped: 673 output_compression: NoCompression
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.733492) EVENT_LOG_v1 {"time_micros": 1769094490733477, "job": 122, "event": "compaction_finished", "compaction_time_micros": 100663, "compaction_time_cpu_micros": 44341, "output_level": 6, "num_output_files": 1, "total_output_size": 10339521, "num_input_records": 14516, "num_output_records": 13843, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490733883, "job": 122, "event": "table_file_deletion", "file_number": 196}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490736965, "job": 122, "event": "table_file_deletion", "file_number": 194}
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.737034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.737041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.737044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.737047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:08:10.737049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:11 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:11 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:12 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:12.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:13.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:13 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:13 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:14.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:15 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:16.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:08:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:08:16 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:16 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:18 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:18 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:18.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:19 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:20 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:20 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:21.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:08:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:08:22 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:22 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:23.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:23 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:23 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:25 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:27.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:27 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:27 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:28 np0005592157 podman[332878]: 2026-01-22 15:08:28.331644406 +0000 UTC m=+0.062081164 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:08:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:29 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:29.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:30 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:30 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:31.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:32 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:32 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:08:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:08:33 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:33.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:34 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:34 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:34.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:35.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:35 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:36.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:36 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:36 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:37.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:37 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:38 np0005592157 podman[332902]: 2026-01-22 15:08:38.354571391 +0000 UTC m=+0.087928072 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:08:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:38.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:38 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:40.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:40 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:42 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:42 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:08:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:43.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:08:43 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:43 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:44.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:44 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:44 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:08:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:45.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fe6dc795-0dc4-4c3d-94db-a58128271b23 does not exist
Jan 22 10:08:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev de868fd8-7b3c-40c3-ac63-55c749687342 does not exist
Jan 22 10:08:45 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c879bc9e-ffd7-4eba-baea-6603cf4d4645 does not exist
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:08:46 np0005592157 podman[333253]: 2026-01-22 15:08:46.201035513 +0000 UTC m=+0.084127608 container create 1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:08:46 np0005592157 podman[333253]: 2026-01-22 15:08:46.141090373 +0000 UTC m=+0.024182568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:46 np0005592157 systemd[1]: Started libpod-conmon-1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae.scope.
Jan 22 10:08:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:08:46 np0005592157 podman[333253]: 2026-01-22 15:08:46.331044203 +0000 UTC m=+0.214136328 container init 1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:08:46 np0005592157 podman[333253]: 2026-01-22 15:08:46.338254561 +0000 UTC m=+0.221346656 container start 1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:08:46 np0005592157 podman[333253]: 2026-01-22 15:08:46.342203118 +0000 UTC m=+0.225295223 container attach 1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cerf, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:08:46 np0005592157 elastic_cerf[333269]: 167 167
Jan 22 10:08:46 np0005592157 systemd[1]: libpod-1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae.scope: Deactivated successfully.
Jan 22 10:08:46 np0005592157 podman[333253]: 2026-01-22 15:08:46.343805848 +0000 UTC m=+0.226897953 container died 1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:08:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f851ea95a53ec7edb41f2033379d5986b95e1784e581f667fc97063bddb3cdd3-merged.mount: Deactivated successfully.
Jan 22 10:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:08:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:08:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:46.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:46 np0005592157 podman[333253]: 2026-01-22 15:08:46.708605614 +0000 UTC m=+0.591697739 container remove 1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:08:46 np0005592157 systemd[1]: libpod-conmon-1ae2d887e8174dea9ad6a7fc74be82527c9d5c5d3b4a7b810b08667a2b392aae.scope: Deactivated successfully.
Jan 22 10:08:46 np0005592157 podman[333295]: 2026-01-22 15:08:46.964830569 +0000 UTC m=+0.083974525 container create de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:08:47 np0005592157 podman[333295]: 2026-01-22 15:08:46.907089563 +0000 UTC m=+0.026233499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:47 np0005592157 systemd[1]: Started libpod-conmon-de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2.scope.
Jan 22 10:08:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:08:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539bf14894854db6f7b54ca38e695eb2cc05201c36c9c29a0aff3fb2be943714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539bf14894854db6f7b54ca38e695eb2cc05201c36c9c29a0aff3fb2be943714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539bf14894854db6f7b54ca38e695eb2cc05201c36c9c29a0aff3fb2be943714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539bf14894854db6f7b54ca38e695eb2cc05201c36c9c29a0aff3fb2be943714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:47 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539bf14894854db6f7b54ca38e695eb2cc05201c36c9c29a0aff3fb2be943714/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:47 np0005592157 podman[333295]: 2026-01-22 15:08:47.260383635 +0000 UTC m=+0.379527561 container init de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:08:47 np0005592157 podman[333295]: 2026-01-22 15:08:47.267760208 +0000 UTC m=+0.386904124 container start de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:08:47 np0005592157 podman[333295]: 2026-01-22 15:08:47.271215843 +0000 UTC m=+0.390359779 container attach de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:08:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:47 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:08:47
Jan 22 10:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'vms']
Jan 22 10:08:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:08:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:08:47.644 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:08:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:08:47.646 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:08:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:08:47.647 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:08:48 np0005592157 eager_williams[333311]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:08:48 np0005592157 eager_williams[333311]: --> relative data size: 1.0
Jan 22 10:08:48 np0005592157 eager_williams[333311]: --> All data devices are unavailable
Jan 22 10:08:48 np0005592157 systemd[1]: libpod-de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2.scope: Deactivated successfully.
Jan 22 10:08:48 np0005592157 podman[333295]: 2026-01-22 15:08:48.120670464 +0000 UTC m=+1.239814410 container died de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 10:08:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-539bf14894854db6f7b54ca38e695eb2cc05201c36c9c29a0aff3fb2be943714-merged.mount: Deactivated successfully.
Jan 22 10:08:48 np0005592157 podman[333295]: 2026-01-22 15:08:48.245468985 +0000 UTC m=+1.364612941 container remove de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:08:48 np0005592157 systemd[1]: libpod-conmon-de73e0f0255c795a8b5f454f6f98e7080c4911331e02a1ba77b6a493a53c55b2.scope: Deactivated successfully.
Jan 22 10:08:48 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:48 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:48 np0005592157 podman[333482]: 2026-01-22 15:08:48.868262091 +0000 UTC m=+0.053151633 container create 968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:08:48 np0005592157 systemd[1]: Started libpod-conmon-968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088.scope.
Jan 22 10:08:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:08:48 np0005592157 podman[333482]: 2026-01-22 15:08:48.836176219 +0000 UTC m=+0.021065741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:49 np0005592157 podman[333482]: 2026-01-22 15:08:49.287431569 +0000 UTC m=+0.472321101 container init 968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:08:49 np0005592157 podman[333482]: 2026-01-22 15:08:49.300245726 +0000 UTC m=+0.485135228 container start 968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lederberg, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:08:49 np0005592157 crazy_lederberg[333499]: 167 167
Jan 22 10:08:49 np0005592157 systemd[1]: libpod-968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088.scope: Deactivated successfully.
Jan 22 10:08:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:49.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:49 np0005592157 podman[333482]: 2026-01-22 15:08:49.716914503 +0000 UTC m=+0.901804005 container attach 968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:08:49 np0005592157 podman[333482]: 2026-01-22 15:08:49.717435536 +0000 UTC m=+0.902325038 container died 968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:08:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-949cdfda60c7acc785de88ff3dab4e895e15bcf2abebec832fab81933926f9bd-merged.mount: Deactivated successfully.
Jan 22 10:08:49 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:49 np0005592157 podman[333482]: 2026-01-22 15:08:49.924384105 +0000 UTC m=+1.109273607 container remove 968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:08:50 np0005592157 systemd[1]: libpod-conmon-968fa1a746285645a50d6e160983b0c825778889954ef0db089e69b6ccb03088.scope: Deactivated successfully.
Jan 22 10:08:50 np0005592157 podman[333526]: 2026-01-22 15:08:50.129054118 +0000 UTC m=+0.068299398 container create 18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:08:50 np0005592157 podman[333526]: 2026-01-22 15:08:50.087580244 +0000 UTC m=+0.026825614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:50 np0005592157 systemd[1]: Started libpod-conmon-18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d.scope.
Jan 22 10:08:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:08:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e019377605e72a158eef5e0a5623595df1cffc60a72c450830655a77e07ba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e019377605e72a158eef5e0a5623595df1cffc60a72c450830655a77e07ba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e019377605e72a158eef5e0a5623595df1cffc60a72c450830655a77e07ba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e019377605e72a158eef5e0a5623595df1cffc60a72c450830655a77e07ba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:50 np0005592157 podman[333526]: 2026-01-22 15:08:50.25511084 +0000 UTC m=+0.194356170 container init 18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:08:50 np0005592157 podman[333526]: 2026-01-22 15:08:50.272091269 +0000 UTC m=+0.211336549 container start 18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:08:50 np0005592157 podman[333526]: 2026-01-22 15:08:50.383814677 +0000 UTC m=+0.323059987 container attach 18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:08:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:50.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:50 np0005592157 sad_booth[333543]: {
Jan 22 10:08:50 np0005592157 sad_booth[333543]:    "0": [
Jan 22 10:08:50 np0005592157 sad_booth[333543]:        {
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "devices": [
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "/dev/loop3"
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            ],
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "lv_name": "ceph_lv0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "lv_size": "7511998464",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "name": "ceph_lv0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "tags": {
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.cluster_name": "ceph",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.crush_device_class": "",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.encrypted": "0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.osd_id": "0",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.type": "block",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:                "ceph.vdo": "0"
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            },
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "type": "block",
Jan 22 10:08:50 np0005592157 sad_booth[333543]:            "vg_name": "ceph_vg0"
Jan 22 10:08:50 np0005592157 sad_booth[333543]:        }
Jan 22 10:08:50 np0005592157 sad_booth[333543]:    ]
Jan 22 10:08:50 np0005592157 sad_booth[333543]: }
Jan 22 10:08:51 np0005592157 systemd[1]: libpod-18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d.scope: Deactivated successfully.
Jan 22 10:08:51 np0005592157 podman[333526]: 2026-01-22 15:08:51.016611519 +0000 UTC m=+0.955856799 container died 18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:08:51 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:51 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-61e019377605e72a158eef5e0a5623595df1cffc60a72c450830655a77e07ba4-merged.mount: Deactivated successfully.
Jan 22 10:08:52 np0005592157 podman[333526]: 2026-01-22 15:08:52.452722853 +0000 UTC m=+2.391968133 container remove 18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:08:52 np0005592157 systemd[1]: libpod-conmon-18fe657bd8e06dad7174b455133ee4691191623a6282b2ebb5711792557b8d4d.scope: Deactivated successfully.
Jan 22 10:08:52 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:52 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:53 np0005592157 podman[333710]: 2026-01-22 15:08:53.134567377 +0000 UTC m=+0.091535101 container create 96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:08:53 np0005592157 podman[333710]: 2026-01-22 15:08:53.067879821 +0000 UTC m=+0.024847625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:53 np0005592157 systemd[1]: Started libpod-conmon-96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2.scope.
Jan 22 10:08:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:08:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:53 np0005592157 podman[333710]: 2026-01-22 15:08:53.415641906 +0000 UTC m=+0.372609640 container init 96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:08:53 np0005592157 podman[333710]: 2026-01-22 15:08:53.428261048 +0000 UTC m=+0.385228782 container start 96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:08:53 np0005592157 affectionate_moore[333726]: 167 167
Jan 22 10:08:53 np0005592157 systemd[1]: libpod-96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2.scope: Deactivated successfully.
Jan 22 10:08:53 np0005592157 podman[333710]: 2026-01-22 15:08:53.45466618 +0000 UTC m=+0.411633954 container attach 96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 10:08:53 np0005592157 podman[333710]: 2026-01-22 15:08:53.45548231 +0000 UTC m=+0.412450064 container died 96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:08:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:53.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b95eb9f80878bb2d8c9e01ce9bcda0cb3b86dacfe07da60fd6d8b09f7ceed390-merged.mount: Deactivated successfully.
Jan 22 10:08:53 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:54 np0005592157 podman[333710]: 2026-01-22 15:08:54.114065759 +0000 UTC m=+1.071033503 container remove 96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:08:54 np0005592157 systemd[1]: libpod-conmon-96de48d65c7f7b83cfdbb3549de4413473fdf2682bac191590fe7d98499bbdb2.scope: Deactivated successfully.
Jan 22 10:08:54 np0005592157 podman[333753]: 2026-01-22 15:08:54.258367241 +0000 UTC m=+0.025645634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:54 np0005592157 podman[333753]: 2026-01-22 15:08:54.760244932 +0000 UTC m=+0.527523315 container create b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:08:54 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:54 np0005592157 systemd[1]: Started libpod-conmon-b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5.scope.
Jan 22 10:08:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:08:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edec24e6f900e434683603aa58fe0cda84805833815d28ea4123bd857e6aab7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edec24e6f900e434683603aa58fe0cda84805833815d28ea4123bd857e6aab7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edec24e6f900e434683603aa58fe0cda84805833815d28ea4123bd857e6aab7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edec24e6f900e434683603aa58fe0cda84805833815d28ea4123bd857e6aab7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:54 np0005592157 podman[333753]: 2026-01-22 15:08:54.982774135 +0000 UTC m=+0.750052498 container init b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:08:54 np0005592157 podman[333753]: 2026-01-22 15:08:54.992563927 +0000 UTC m=+0.759842280 container start b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:08:55 np0005592157 podman[333753]: 2026-01-22 15:08:55.0818103 +0000 UTC m=+0.849088643 container attach b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:08:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:55.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:55 np0005592157 silly_galileo[333770]: {
Jan 22 10:08:55 np0005592157 silly_galileo[333770]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:08:55 np0005592157 silly_galileo[333770]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:08:55 np0005592157 silly_galileo[333770]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:08:55 np0005592157 silly_galileo[333770]:        "osd_id": 0,
Jan 22 10:08:55 np0005592157 silly_galileo[333770]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:08:55 np0005592157 silly_galileo[333770]:        "type": "bluestore"
Jan 22 10:08:55 np0005592157 silly_galileo[333770]:    }
Jan 22 10:08:55 np0005592157 silly_galileo[333770]: }
Jan 22 10:08:55 np0005592157 systemd[1]: libpod-b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5.scope: Deactivated successfully.
Jan 22 10:08:55 np0005592157 podman[333753]: 2026-01-22 15:08:55.830273458 +0000 UTC m=+1.597551831 container died b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:08:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-edec24e6f900e434683603aa58fe0cda84805833815d28ea4123bd857e6aab7d-merged.mount: Deactivated successfully.
Jan 22 10:08:56 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:56 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:08:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:56.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:08:57 np0005592157 podman[333753]: 2026-01-22 15:08:57.137990204 +0000 UTC m=+2.905268547 container remove b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:08:57 np0005592157 systemd[1]: libpod-conmon-b0e7cd1baa94163ab3aa3758be208cc78b8f9e4d1e53e52710e6077ef98626b5.scope: Deactivated successfully.
Jan 22 10:08:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:08:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:08:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:08:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:57.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:08:57 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5993cc3d-f40d-4edc-9d0e-dcc2e14d71e7 does not exist
Jan 22 10:08:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 01480146-741e-4831-bf36-93e2dfd2ad36 does not exist
Jan 22 10:08:57 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ec30497e-a251-4f76-8d98-73af2afd7b5d does not exist
Jan 22 10:08:57 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:08:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:58.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:08:58 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:58 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:08:59 np0005592157 podman[333857]: 2026-01-22 15:08:59.309711088 +0000 UTC m=+0.049914273 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 10:08:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:08:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:59.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:00 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:00.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:01 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:01 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:02 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:02.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:03.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:03 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:09:04.103 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:09:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:09:04.105 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:09:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:04.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002791918213753955 of space, bias 1.0, pg target 0.8264077912711707 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:09:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:09:04 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:06 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:06 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:06.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:07 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:08 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:08.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:09 np0005592157 podman[333932]: 2026-01-22 15:09:09.353958608 +0000 UTC m=+0.089818799 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:09:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:09.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:10 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:11 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:11 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:12.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:12 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:13 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:14 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:09:14.107 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:09:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:14 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:15 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:15 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:09:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:09:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:16.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:16 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:17.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:17 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:18 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:19.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:19 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:20.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:20 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:20 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:21.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:22 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:23 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:23.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:24 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:24.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:25 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:25.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:26.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:27 np0005592157 ceph-mon[74359]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:27 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:27 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:27.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:28 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:28 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:29 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:30 np0005592157 podman[334019]: 2026-01-22 15:09:30.316848173 +0000 UTC m=+0.049627836 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:09:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 109 slow ops, oldest one blocked for 5558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:30.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:31 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:31 np0005592157 ceph-mon[74359]: Health check update: 109 slow ops, oldest one blocked for 5558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:31.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:32 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:33 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:33.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:34 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:35 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:35.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 5563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:36 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:36 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 5563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:36.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:37 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:37.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:38 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:38.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:39 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:40 np0005592157 podman[334043]: 2026-01-22 15:09:40.146343473 +0000 UTC m=+0.123846178 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:09:40 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 5568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:41 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:41 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 5568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:09:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:41.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:09:42 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:42.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:43 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:43.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:44 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:45.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 5573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:45 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:45 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 5573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:09:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:09:46 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:46.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:47.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:09:47
Jan 22 10:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'volumes', 'backups', '.rgw.root']
Jan 22 10:09:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:09:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:09:47.645 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:09:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:09:47.645 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:09:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:09:47.645 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:09:47 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:48 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:49.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:49 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 5578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:50.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:50 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:50 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 5578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:51.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:51 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:52.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:52 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:53.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:54 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:54.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:55 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:55.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 5583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:56 np0005592157 ceph-mon[74359]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:56 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 5583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:09:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:56.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:09:57 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:09:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:58.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 10:09:58 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:09:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 10:09:59 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:09:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:09:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:09:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:59 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:09:59 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:00.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: Health check update: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:10:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:10:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:01 np0005592157 podman[334263]: 2026-01-22 15:10:01.338971515 +0000 UTC m=+0.068975174 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 10:10:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:01.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bd5444dc-2846-4195-8841-4dcb6092e33b does not exist
Jan 22 10:10:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9331578b-6994-435c-8db0-6543bfb89e88 does not exist
Jan 22 10:10:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f64b0a2b-6822-4924-9f6a-987e65a0c3bc does not exist
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:10:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:10:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:03 np0005592157 podman[334474]: 2026-01-22 15:10:03.198058022 +0000 UTC m=+0.046665853 container create 0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:10:03 np0005592157 systemd[1]: Started libpod-conmon-0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30.scope.
Jan 22 10:10:03 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:10:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:10:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:10:03 np0005592157 podman[334474]: 2026-01-22 15:10:03.170289716 +0000 UTC m=+0.018897567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:10:03 np0005592157 podman[334474]: 2026-01-22 15:10:03.282034165 +0000 UTC m=+0.130642086 container init 0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:10:03 np0005592157 podman[334474]: 2026-01-22 15:10:03.294438561 +0000 UTC m=+0.143046432 container start 0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:10:03 np0005592157 podman[334474]: 2026-01-22 15:10:03.298526932 +0000 UTC m=+0.147134853 container attach 0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:10:03 np0005592157 brave_bhaskara[334490]: 167 167
Jan 22 10:10:03 np0005592157 systemd[1]: libpod-0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30.scope: Deactivated successfully.
Jan 22 10:10:03 np0005592157 podman[334474]: 2026-01-22 15:10:03.299985868 +0000 UTC m=+0.148593699 container died 0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:10:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1b9f1f6890609218c4bf9c59188c94c3d420329ca5e907490ad116e772a7963e-merged.mount: Deactivated successfully.
Jan 22 10:10:03 np0005592157 podman[334474]: 2026-01-22 15:10:03.33935841 +0000 UTC m=+0.187966241 container remove 0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:10:03 np0005592157 systemd[1]: libpod-conmon-0cc674d852df3c3f0d1fe79e1143c05db25a54f05cb81c746b34743f6a541f30.scope: Deactivated successfully.
Jan 22 10:10:03 np0005592157 podman[334515]: 2026-01-22 15:10:03.55483343 +0000 UTC m=+0.048666153 container create de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:10:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:03.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:03 np0005592157 systemd[1]: Started libpod-conmon-de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756.scope.
Jan 22 10:10:03 np0005592157 podman[334515]: 2026-01-22 15:10:03.532815536 +0000 UTC m=+0.026648269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:10:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:10:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d234f0b4c31fa605c6ce10057bc62f36c5020c620723f368d110f4ac966fabd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d234f0b4c31fa605c6ce10057bc62f36c5020c620723f368d110f4ac966fabd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d234f0b4c31fa605c6ce10057bc62f36c5020c620723f368d110f4ac966fabd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d234f0b4c31fa605c6ce10057bc62f36c5020c620723f368d110f4ac966fabd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d234f0b4c31fa605c6ce10057bc62f36c5020c620723f368d110f4ac966fabd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:03 np0005592157 podman[334515]: 2026-01-22 15:10:03.664774804 +0000 UTC m=+0.158607547 container init de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:10:03 np0005592157 podman[334515]: 2026-01-22 15:10:03.674245518 +0000 UTC m=+0.168078211 container start de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:10:03 np0005592157 podman[334515]: 2026-01-22 15:10:03.678677977 +0000 UTC m=+0.172510720 container attach de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:10:04 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:04 np0005592157 vigorous_ellis[334531]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:10:04 np0005592157 vigorous_ellis[334531]: --> relative data size: 1.0
Jan 22 10:10:04 np0005592157 vigorous_ellis[334531]: --> All data devices are unavailable
Jan 22 10:10:04 np0005592157 systemd[1]: libpod-de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756.scope: Deactivated successfully.
Jan 22 10:10:04 np0005592157 podman[334515]: 2026-01-22 15:10:04.514889292 +0000 UTC m=+1.008721995 container died de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:10:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2d234f0b4c31fa605c6ce10057bc62f36c5020c620723f368d110f4ac966fabd-merged.mount: Deactivated successfully.
Jan 22 10:10:04 np0005592157 podman[334515]: 2026-01-22 15:10:04.572856923 +0000 UTC m=+1.066689616 container remove de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:10:04 np0005592157 systemd[1]: libpod-conmon-de3ad3e5c4ced4d9d56ca9ffe2dcd63277d2bf9c0f0a9bee6f05970ba7988756.scope: Deactivated successfully.
Jan 22 10:10:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002791918213753955 of space, bias 1.0, pg target 0.8264077912711707 quantized to 32 (current 32)
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:10:04 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:10:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:10:05.009 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:10:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:10:05.010 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:10:05 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:05 np0005592157 podman[334701]: 2026-01-22 15:10:05.342680648 +0000 UTC m=+0.076936750 container create b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:10:05 np0005592157 systemd[1]: Started libpod-conmon-b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a.scope.
Jan 22 10:10:05 np0005592157 podman[334701]: 2026-01-22 15:10:05.312088583 +0000 UTC m=+0.046344715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:10:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:10:05 np0005592157 podman[334701]: 2026-01-22 15:10:05.432413864 +0000 UTC m=+0.166670046 container init b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:10:05 np0005592157 podman[334701]: 2026-01-22 15:10:05.44320886 +0000 UTC m=+0.177464972 container start b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:10:05 np0005592157 podman[334701]: 2026-01-22 15:10:05.448142652 +0000 UTC m=+0.182398764 container attach b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:10:05 np0005592157 kind_booth[334718]: 167 167
Jan 22 10:10:05 np0005592157 systemd[1]: libpod-b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a.scope: Deactivated successfully.
Jan 22 10:10:05 np0005592157 podman[334701]: 2026-01-22 15:10:05.449258749 +0000 UTC m=+0.183514831 container died b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:10:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cb5760db24606e1406619f6258c2eadcdac4c77843c43b39d92529d32d19e776-merged.mount: Deactivated successfully.
Jan 22 10:10:05 np0005592157 podman[334701]: 2026-01-22 15:10:05.501175561 +0000 UTC m=+0.235431673 container remove b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 10:10:05 np0005592157 systemd[1]: libpod-conmon-b08dc539a0b5658aa0f4e30dff0626730c77061bae3161de9116e8de6beab72a.scope: Deactivated successfully.
Jan 22 10:10:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:10:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:05.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:10:05 np0005592157 podman[334743]: 2026-01-22 15:10:05.741592056 +0000 UTC m=+0.054350343 container create 0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 22 10:10:05 np0005592157 systemd[1]: Started libpod-conmon-0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68.scope.
Jan 22 10:10:05 np0005592157 podman[334743]: 2026-01-22 15:10:05.721268514 +0000 UTC m=+0.034026831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:10:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:10:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d2f503e2771431b1b872abbe43990980b9735f0d8d2c9f3487edda690ce803/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d2f503e2771431b1b872abbe43990980b9735f0d8d2c9f3487edda690ce803/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d2f503e2771431b1b872abbe43990980b9735f0d8d2c9f3487edda690ce803/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d2f503e2771431b1b872abbe43990980b9735f0d8d2c9f3487edda690ce803/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:05 np0005592157 podman[334743]: 2026-01-22 15:10:05.857582869 +0000 UTC m=+0.170341236 container init 0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:10:05 np0005592157 podman[334743]: 2026-01-22 15:10:05.866983061 +0000 UTC m=+0.179741358 container start 0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:10:05 np0005592157 podman[334743]: 2026-01-22 15:10:05.87099061 +0000 UTC m=+0.183748897 container attach 0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:10:06 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:06 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]: {
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:    "0": [
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:        {
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "devices": [
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "/dev/loop3"
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            ],
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "lv_name": "ceph_lv0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "lv_size": "7511998464",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "name": "ceph_lv0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "tags": {
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.cluster_name": "ceph",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.crush_device_class": "",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.encrypted": "0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.osd_id": "0",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.type": "block",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:                "ceph.vdo": "0"
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            },
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "type": "block",
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:            "vg_name": "ceph_vg0"
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:        }
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]:    ]
Jan 22 10:10:06 np0005592157 laughing_perlman[334760]: }
Jan 22 10:10:06 np0005592157 systemd[1]: libpod-0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68.scope: Deactivated successfully.
Jan 22 10:10:06 np0005592157 podman[334743]: 2026-01-22 15:10:06.642883037 +0000 UTC m=+0.955641314 container died 0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 22 10:10:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-54d2f503e2771431b1b872abbe43990980b9735f0d8d2c9f3487edda690ce803-merged.mount: Deactivated successfully.
Jan 22 10:10:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:06.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:06 np0005592157 podman[334743]: 2026-01-22 15:10:06.697875534 +0000 UTC m=+1.010633811 container remove 0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:10:06 np0005592157 systemd[1]: libpod-conmon-0117a83155c31ab693979655d7f1a60656403555a2ac387291272452ac1afc68.scope: Deactivated successfully.
Jan 22 10:10:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:07 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:07 np0005592157 podman[334923]: 2026-01-22 15:10:07.337251089 +0000 UTC m=+0.047906583 container create 31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:10:07 np0005592157 systemd[1]: Started libpod-conmon-31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224.scope.
Jan 22 10:10:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:10:07 np0005592157 podman[334923]: 2026-01-22 15:10:07.316827865 +0000 UTC m=+0.027483399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:10:07 np0005592157 podman[334923]: 2026-01-22 15:10:07.4292386 +0000 UTC m=+0.139894374 container init 31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:10:07 np0005592157 podman[334923]: 2026-01-22 15:10:07.44179758 +0000 UTC m=+0.152453064 container start 31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:10:07 np0005592157 podman[334923]: 2026-01-22 15:10:07.446456685 +0000 UTC m=+0.157112139 container attach 31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shamir, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:10:07 np0005592157 systemd[1]: libpod-31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224.scope: Deactivated successfully.
Jan 22 10:10:07 np0005592157 jovial_shamir[334939]: 167 167
Jan 22 10:10:07 np0005592157 conmon[334939]: conmon 31b38d3544ca2321ec3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224.scope/container/memory.events
Jan 22 10:10:07 np0005592157 podman[334923]: 2026-01-22 15:10:07.450005413 +0000 UTC m=+0.160660867 container died 31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shamir, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:10:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d9dee56665ed16a655a24fa4f2a3f62214c6af31730616ce8b9ccc3d806a06c4-merged.mount: Deactivated successfully.
Jan 22 10:10:07 np0005592157 podman[334923]: 2026-01-22 15:10:07.486870283 +0000 UTC m=+0.197525737 container remove 31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:10:07 np0005592157 systemd[1]: libpod-conmon-31b38d3544ca2321ec3b7066a98b8852eb6290e916490f20117f91187e044224.scope: Deactivated successfully.
Jan 22 10:10:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:07.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:07 np0005592157 podman[334964]: 2026-01-22 15:10:07.678304039 +0000 UTC m=+0.047639607 container create 3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:10:07 np0005592157 systemd[1]: Started libpod-conmon-3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95.scope.
Jan 22 10:10:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:10:07 np0005592157 podman[334964]: 2026-01-22 15:10:07.657390773 +0000 UTC m=+0.026726331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:10:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168cb9c3d41bd29c46f291e3a8500c216c3161466caa8b9db4abdd12b15483c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168cb9c3d41bd29c46f291e3a8500c216c3161466caa8b9db4abdd12b15483c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168cb9c3d41bd29c46f291e3a8500c216c3161466caa8b9db4abdd12b15483c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168cb9c3d41bd29c46f291e3a8500c216c3161466caa8b9db4abdd12b15483c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:10:07 np0005592157 podman[334964]: 2026-01-22 15:10:07.786818258 +0000 UTC m=+0.156153836 container init 3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:10:07 np0005592157 podman[334964]: 2026-01-22 15:10:07.793035312 +0000 UTC m=+0.162370850 container start 3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_keller, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:10:07 np0005592157 podman[334964]: 2026-01-22 15:10:07.796824015 +0000 UTC m=+0.166159543 container attach 3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_keller, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:10:08 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:08 np0005592157 awesome_keller[334982]: {
Jan 22 10:10:08 np0005592157 awesome_keller[334982]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:10:08 np0005592157 awesome_keller[334982]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:10:08 np0005592157 awesome_keller[334982]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:10:08 np0005592157 awesome_keller[334982]:        "osd_id": 0,
Jan 22 10:10:08 np0005592157 awesome_keller[334982]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:10:08 np0005592157 awesome_keller[334982]:        "type": "bluestore"
Jan 22 10:10:08 np0005592157 awesome_keller[334982]:    }
Jan 22 10:10:08 np0005592157 awesome_keller[334982]: }
Jan 22 10:10:08 np0005592157 systemd[1]: libpod-3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95.scope: Deactivated successfully.
Jan 22 10:10:08 np0005592157 podman[334964]: 2026-01-22 15:10:08.657113154 +0000 UTC m=+1.026448692 container died 3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 10:10:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b168cb9c3d41bd29c46f291e3a8500c216c3161466caa8b9db4abdd12b15483c-merged.mount: Deactivated successfully.
Jan 22 10:10:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:10:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:10:08 np0005592157 podman[334964]: 2026-01-22 15:10:08.723292498 +0000 UTC m=+1.092628036 container remove 3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_keller, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 10:10:08 np0005592157 systemd[1]: libpod-conmon-3ba9a6c103e9c0d0d308c847e6371d3736bd53afe808c3aa85f81edc86187e95.scope: Deactivated successfully.
Jan 22 10:10:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:10:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:10:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:08 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bd579bc5-23ee-4b33-b96c-d8bb59cfda6d does not exist
Jan 22 10:10:08 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fcd2af5a-108f-4445-9ac5-afe804c1d93a does not exist
Jan 22 10:10:08 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev db25d4b9-cc83-4e33-83c8-e7753774528b does not exist
Jan 22 10:10:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:09.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:09 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:10 np0005592157 podman[335068]: 2026-01-22 15:10:10.468040641 +0000 UTC m=+0.189595531 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 10:10:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:10.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:10 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:10 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:11.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:11 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:12 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:10:12.012 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:10:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:12 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:14 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:14.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:15 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:15.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:10:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:10:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:16 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:16 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:17.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:18 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:10:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2399157288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:10:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:10:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2399157288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:10:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:18.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:19 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:19 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:20.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:20 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:21 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:22.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:22 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:22 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:23 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:24.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:25 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:26 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:26 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:10:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.380273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627380312, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 487, "total_data_size": 2770947, "memory_usage": 2810432, "flush_reason": "Manual Compaction"}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627398322, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 2692767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85690, "largest_seqno": 87740, "table_properties": {"data_size": 2684212, "index_size": 4536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 26787, "raw_average_key_size": 22, "raw_value_size": 2663684, "raw_average_value_size": 2282, "num_data_blocks": 194, "num_entries": 1167, "num_filter_entries": 1167, "num_deletions": 487, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094490, "oldest_key_time": 1769094490, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 18090 microseconds, and 6332 cpu microseconds.
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398364) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 2692767 bytes OK
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398380) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.400157) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.400180) EVENT_LOG_v1 {"time_micros": 1769094627400173, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.400203) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 2761155, prev total WAL file size 2761155, number of live WAL files 2.
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.401219) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(2629KB)], [197(10097KB)]
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627401259, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 13032288, "oldest_snapshot_seqno": -1}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 14019 keys, 11311147 bytes, temperature: kUnknown
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627495170, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 11311147, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11235979, "index_size": 39023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386662, "raw_average_key_size": 27, "raw_value_size": 10998439, "raw_average_value_size": 784, "num_data_blocks": 1405, "num_entries": 14019, "num_filter_entries": 14019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.495448) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 11311147 bytes
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.497040) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.7 rd, 120.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 15010, records dropped: 991 output_compression: NoCompression
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.497062) EVENT_LOG_v1 {"time_micros": 1769094627497051, "job": 124, "event": "compaction_finished", "compaction_time_micros": 93992, "compaction_time_cpu_micros": 38019, "output_level": 6, "num_output_files": 1, "total_output_size": 11311147, "num_input_records": 15010, "num_output_records": 14019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627497732, "job": 124, "event": "table_file_deletion", "file_number": 199}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627499966, "job": 124, "event": "table_file_deletion", "file_number": 197}
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.401160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.500053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.500063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.500066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.500069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:10:27.500072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:27.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:28 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:29 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:29.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:30 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:10:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:10:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:31 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:31 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:31.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:32 np0005592157 podman[335156]: 2026-01-22 15:10:32.324854208 +0000 UTC m=+0.057718716 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:10:32 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:33.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:33 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:34.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:35 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:35.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:36 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:36 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:37 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:10:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:37.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:10:38 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:38.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:39 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:39.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:40 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:41 np0005592157 podman[335181]: 2026-01-22 15:10:41.350701927 +0000 UTC m=+0.087730977 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:10:41 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:41 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:41.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:42.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:42 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:43.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:44 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:44.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:45.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:10:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:10:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:46 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:46 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:46.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:10:47
Jan 22 10:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'backups']
Jan 22 10:10:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:10:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:10:47.646 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:10:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:10:47.647 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:10:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:10:47.647 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:10:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:47.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:48 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:48 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:48 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:49 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:49.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:50 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:50.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:51 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:51.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:52 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:52 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:52.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:53 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:53 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:54 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:54.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:55 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:55.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:56 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:56 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:56.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:57 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:57 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:57.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:10:58 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:58.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:59 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:10:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:10:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:10:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:59.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:00 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:00.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:01.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:01 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:11:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:02.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:11:03 np0005592157 podman[335318]: 2026-01-22 15:11:03.32479433 +0000 UTC m=+0.053659196 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 10:11:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:03.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:04.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:04 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:04 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:04 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002791918213753955 of space, bias 1.0, pg target 0.8264077912711707 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:11:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:11:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:05.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:11:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:05.834 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:11:05 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:05.835 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:11:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:05 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:06.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:07 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:07 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:08 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:08 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:08.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:09 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:09.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 09eb9242-e7e3-4206-9ad7-1882b0e2433f does not exist
Jan 22 10:11:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2d98cbcb-2be1-4ac3-8cbf-8f5685390190 does not exist
Jan 22 10:11:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cae351d4-af82-4d64-9473-7e9c80731699 does not exist
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:11:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:10.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:10 np0005592157 podman[335612]: 2026-01-22 15:11:10.925315371 +0000 UTC m=+0.055614424 container create 07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:11:10 np0005592157 systemd[1]: Started libpod-conmon-07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06.scope.
Jan 22 10:11:10 np0005592157 podman[335612]: 2026-01-22 15:11:10.899330729 +0000 UTC m=+0.029629782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:11:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:11:11 np0005592157 podman[335612]: 2026-01-22 15:11:11.041102429 +0000 UTC m=+0.171401472 container init 07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 10:11:11 np0005592157 podman[335612]: 2026-01-22 15:11:11.05246749 +0000 UTC m=+0.182766493 container start 07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 10:11:11 np0005592157 podman[335612]: 2026-01-22 15:11:11.056073899 +0000 UTC m=+0.186372922 container attach 07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 10:11:11 np0005592157 youthful_dijkstra[335628]: 167 167
Jan 22 10:11:11 np0005592157 systemd[1]: libpod-07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06.scope: Deactivated successfully.
Jan 22 10:11:11 np0005592157 conmon[335628]: conmon 07c1890152a8fd649525 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06.scope/container/memory.events
Jan 22 10:11:11 np0005592157 podman[335612]: 2026-01-22 15:11:11.061691397 +0000 UTC m=+0.191990410 container died 07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 10:11:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-80e66cee536b9516b49fdfc038fa6392e8b17609a94190e50bba32ac78494ab1-merged.mount: Deactivated successfully.
Jan 22 10:11:11 np0005592157 podman[335612]: 2026-01-22 15:11:11.106728019 +0000 UTC m=+0.237027032 container remove 07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:11:11 np0005592157 systemd[1]: libpod-conmon-07c1890152a8fd649525d7f140f673dbb4118e82d261cb16b0b881ba00de8d06.scope: Deactivated successfully.
Jan 22 10:11:11 np0005592157 podman[335652]: 2026-01-22 15:11:11.320441015 +0000 UTC m=+0.060241508 container create bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Jan 22 10:11:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:11 np0005592157 systemd[1]: Started libpod-conmon-bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c.scope.
Jan 22 10:11:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:11:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e102a65e62c02da8f8127e30ecdf764138d501f3944398e2bddc6d305ffa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e102a65e62c02da8f8127e30ecdf764138d501f3944398e2bddc6d305ffa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e102a65e62c02da8f8127e30ecdf764138d501f3944398e2bddc6d305ffa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e102a65e62c02da8f8127e30ecdf764138d501f3944398e2bddc6d305ffa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e102a65e62c02da8f8127e30ecdf764138d501f3944398e2bddc6d305ffa8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:11 np0005592157 podman[335652]: 2026-01-22 15:11:11.302225986 +0000 UTC m=+0.042026499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:11:11 np0005592157 podman[335652]: 2026-01-22 15:11:11.399799515 +0000 UTC m=+0.139600038 container init bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 10:11:11 np0005592157 podman[335652]: 2026-01-22 15:11:11.413621626 +0000 UTC m=+0.153422119 container start bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:11:11 np0005592157 podman[335652]: 2026-01-22 15:11:11.417901382 +0000 UTC m=+0.157701985 container attach bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:11:11 np0005592157 podman[335671]: 2026-01-22 15:11:11.511316568 +0000 UTC m=+0.117878451 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 10:11:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:11:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:11.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:11:11 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:12 np0005592157 sad_dewdney[335669]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:11:12 np0005592157 sad_dewdney[335669]: --> relative data size: 1.0
Jan 22 10:11:12 np0005592157 sad_dewdney[335669]: --> All data devices are unavailable
Jan 22 10:11:12 np0005592157 systemd[1]: libpod-bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c.scope: Deactivated successfully.
Jan 22 10:11:12 np0005592157 podman[335652]: 2026-01-22 15:11:12.280917688 +0000 UTC m=+1.020718171 container died bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:11:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-db8e102a65e62c02da8f8127e30ecdf764138d501f3944398e2bddc6d305ffa8-merged.mount: Deactivated successfully.
Jan 22 10:11:12 np0005592157 podman[335652]: 2026-01-22 15:11:12.340987631 +0000 UTC m=+1.080788114 container remove bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:11:12 np0005592157 systemd[1]: libpod-conmon-bf2dbcb9e318e64e6fdebeab7b9cca058f1dd7e42861ea241850f49e6e1c598c.scope: Deactivated successfully.
Jan 22 10:11:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:12.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:12 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:12 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:13 np0005592157 podman[335867]: 2026-01-22 15:11:13.042736116 +0000 UTC m=+0.057036960 container create 75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:11:13 np0005592157 systemd[1]: Started libpod-conmon-75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882.scope.
Jan 22 10:11:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:11:13 np0005592157 podman[335867]: 2026-01-22 15:11:13.026536696 +0000 UTC m=+0.040837550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:11:13 np0005592157 podman[335867]: 2026-01-22 15:11:13.122343471 +0000 UTC m=+0.136644325 container init 75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:11:13 np0005592157 podman[335867]: 2026-01-22 15:11:13.128410281 +0000 UTC m=+0.142711125 container start 75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 10:11:13 np0005592157 gallant_goldberg[335883]: 167 167
Jan 22 10:11:13 np0005592157 systemd[1]: libpod-75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882.scope: Deactivated successfully.
Jan 22 10:11:13 np0005592157 conmon[335883]: conmon 75be3ea747af04835676 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882.scope/container/memory.events
Jan 22 10:11:13 np0005592157 podman[335867]: 2026-01-22 15:11:13.133567068 +0000 UTC m=+0.147867922 container attach 75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:11:13 np0005592157 podman[335867]: 2026-01-22 15:11:13.134759777 +0000 UTC m=+0.149060611 container died 75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:11:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-158bef5ade61e4879f43dfab1521e556196d6ff063fc50e0699e29a31a152ea5-merged.mount: Deactivated successfully.
Jan 22 10:11:13 np0005592157 podman[335867]: 2026-01-22 15:11:13.17335387 +0000 UTC m=+0.187654714 container remove 75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldberg, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:11:13 np0005592157 systemd[1]: libpod-conmon-75be3ea747af04835676b928250a60c37ca6c6631e0d829748c75258fe69d882.scope: Deactivated successfully.
Jan 22 10:11:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:13 np0005592157 podman[335907]: 2026-01-22 15:11:13.372026195 +0000 UTC m=+0.061298624 container create 9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gauss, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 10:11:13 np0005592157 systemd[1]: Started libpod-conmon-9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997.scope.
Jan 22 10:11:13 np0005592157 podman[335907]: 2026-01-22 15:11:13.348785381 +0000 UTC m=+0.038057860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:11:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:11:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f9f619fa0bb3337af24fd421eaa2a80b4cdff69a6e5b7e521076e88156fbce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f9f619fa0bb3337af24fd421eaa2a80b4cdff69a6e5b7e521076e88156fbce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f9f619fa0bb3337af24fd421eaa2a80b4cdff69a6e5b7e521076e88156fbce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f9f619fa0bb3337af24fd421eaa2a80b4cdff69a6e5b7e521076e88156fbce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:13 np0005592157 podman[335907]: 2026-01-22 15:11:13.470552377 +0000 UTC m=+0.159824856 container init 9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:11:13 np0005592157 podman[335907]: 2026-01-22 15:11:13.480904153 +0000 UTC m=+0.170176592 container start 9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:11:13 np0005592157 podman[335907]: 2026-01-22 15:11:13.484610324 +0000 UTC m=+0.173882793 container attach 9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:11:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:13.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:13.837 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:11:14 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:14 np0005592157 confident_gauss[335924]: {
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:    "0": [
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:        {
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "devices": [
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "/dev/loop3"
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            ],
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "lv_name": "ceph_lv0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "lv_size": "7511998464",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "name": "ceph_lv0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "tags": {
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.cluster_name": "ceph",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.crush_device_class": "",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.encrypted": "0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.osd_id": "0",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.type": "block",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:                "ceph.vdo": "0"
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            },
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "type": "block",
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:            "vg_name": "ceph_vg0"
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:        }
Jan 22 10:11:14 np0005592157 confident_gauss[335924]:    ]
Jan 22 10:11:14 np0005592157 confident_gauss[335924]: }
Jan 22 10:11:14 np0005592157 systemd[1]: libpod-9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997.scope: Deactivated successfully.
Jan 22 10:11:14 np0005592157 podman[335907]: 2026-01-22 15:11:14.311688892 +0000 UTC m=+1.000961321 container died 9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:11:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-71f9f619fa0bb3337af24fd421eaa2a80b4cdff69a6e5b7e521076e88156fbce-merged.mount: Deactivated successfully.
Jan 22 10:11:14 np0005592157 podman[335907]: 2026-01-22 15:11:14.369950161 +0000 UTC m=+1.059222600 container remove 9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gauss, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:11:14 np0005592157 systemd[1]: libpod-conmon-9fc61bb2229376eca4b5930dc3ae1219568aff529339e0a0401fbdeebc481997.scope: Deactivated successfully.
Jan 22 10:11:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:14.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:15 np0005592157 podman[336085]: 2026-01-22 15:11:15.161031261 +0000 UTC m=+0.054111697 container create 4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 10:11:15 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:15 np0005592157 systemd[1]: Started libpod-conmon-4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e.scope.
Jan 22 10:11:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:11:15 np0005592157 podman[336085]: 2026-01-22 15:11:15.140974986 +0000 UTC m=+0.034055452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:11:15 np0005592157 podman[336085]: 2026-01-22 15:11:15.239325814 +0000 UTC m=+0.132406240 container init 4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:11:15 np0005592157 podman[336085]: 2026-01-22 15:11:15.250999562 +0000 UTC m=+0.144080018 container start 4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 10:11:15 np0005592157 systemd[1]: libpod-4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e.scope: Deactivated successfully.
Jan 22 10:11:15 np0005592157 sad_varahamihira[336101]: 167 167
Jan 22 10:11:15 np0005592157 podman[336085]: 2026-01-22 15:11:15.254479898 +0000 UTC m=+0.147560364 container attach 4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:11:15 np0005592157 conmon[336101]: conmon 4c6ef137714565fdde8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e.scope/container/memory.events
Jan 22 10:11:15 np0005592157 podman[336085]: 2026-01-22 15:11:15.255467483 +0000 UTC m=+0.148547939 container died 4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_varahamihira, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:11:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2413085558ec22d3d11829f440ea3860f8ce5579673bfa029439d1a818fa27da-merged.mount: Deactivated successfully.
Jan 22 10:11:15 np0005592157 podman[336085]: 2026-01-22 15:11:15.296826664 +0000 UTC m=+0.189907100 container remove 4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_varahamihira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:11:15 np0005592157 systemd[1]: libpod-conmon-4c6ef137714565fdde8e9d76da8f72dbb7550b9296ccfc0365ab1af861f0f63e.scope: Deactivated successfully.
Jan 22 10:11:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:15 np0005592157 podman[336126]: 2026-01-22 15:11:15.4951563 +0000 UTC m=+0.049567685 container create 4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:11:15 np0005592157 systemd[1]: Started libpod-conmon-4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86.scope.
Jan 22 10:11:15 np0005592157 podman[336126]: 2026-01-22 15:11:15.468081512 +0000 UTC m=+0.022492937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:11:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:11:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fc0c5161af6283d8c358c243c6734e6cffdc0d4e3b7244ce42d5f8cfd4a365/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fc0c5161af6283d8c358c243c6734e6cffdc0d4e3b7244ce42d5f8cfd4a365/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fc0c5161af6283d8c358c243c6734e6cffdc0d4e3b7244ce42d5f8cfd4a365/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fc0c5161af6283d8c358c243c6734e6cffdc0d4e3b7244ce42d5f8cfd4a365/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:11:15 np0005592157 podman[336126]: 2026-01-22 15:11:15.587526801 +0000 UTC m=+0.141938216 container init 4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:11:15 np0005592157 podman[336126]: 2026-01-22 15:11:15.592676778 +0000 UTC m=+0.147088183 container start 4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:11:15 np0005592157 podman[336126]: 2026-01-22 15:11:15.596341838 +0000 UTC m=+0.150753253 container attach 4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 10:11:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:15.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]: {
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]:        "osd_id": 0,
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]:        "type": "bluestore"
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]:    }
Jan 22 10:11:16 np0005592157 vigilant_tharp[336143]: }
Jan 22 10:11:16 np0005592157 systemd[1]: libpod-4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86.scope: Deactivated successfully.
Jan 22 10:11:16 np0005592157 conmon[336143]: conmon 4ec7fdb9a9a2ac3456df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86.scope/container/memory.events
Jan 22 10:11:16 np0005592157 podman[336126]: 2026-01-22 15:11:16.398379949 +0000 UTC m=+0.952791374 container died 4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:11:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-29fc0c5161af6283d8c358c243c6734e6cffdc0d4e3b7244ce42d5f8cfd4a365-merged.mount: Deactivated successfully.
Jan 22 10:11:16 np0005592157 podman[336126]: 2026-01-22 15:11:16.446370164 +0000 UTC m=+1.000781549 container remove 4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 10:11:16 np0005592157 systemd[1]: libpod-conmon-4ec7fdb9a9a2ac3456dfdcfad13baa3488b9815fe2d6b6ae4febd5eda4ecbf86.scope: Deactivated successfully.
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 06c7f34f-7634-4666-90be-75e07c37c599 does not exist
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e360a4b5-847d-4fd0-b432-d8f559e8879d does not exist
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d6a35e5f-d834-4b57-bacf-a7840a557cd0 does not exist
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:11:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:16.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:17 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:17 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:17.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:18 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:18.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:19 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:20.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:20 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:21.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:21 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:22.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:23 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:23 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:23.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:24.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:24 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:24 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:25.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:11:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:26.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:11:27 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:27 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:27.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:28 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:28 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:28.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:29.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:29 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:30.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:30 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:31.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:32 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:32 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:33 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:33 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:33.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:34 np0005592157 podman[336286]: 2026-01-22 15:11:34.330629308 +0000 UTC m=+0.059627163 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 10:11:34 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:34.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:35 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:35.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:36 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 110 slow ops, oldest one blocked for 5688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:36.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:37 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:37 np0005592157 ceph-mon[74359]: Health check update: 110 slow ops, oldest one blocked for 5688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:37.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:38 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:38.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:39.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:40 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:41 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:41 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:41.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:42 np0005592157 podman[336310]: 2026-01-22 15:11:42.383248949 +0000 UTC m=+0.110758075 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:11:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:42.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:43 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:43.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:44 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:44.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:45 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:45.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:46 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 5698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:46 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:11:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:11:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:46.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:47 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:47 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 5698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:11:47
Jan 22 10:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', 'backups', 'images', 'volumes', 'default.rgw.log', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 22 10:11:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:11:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:47.647 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:11:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:47.647 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:11:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:47.648 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:11:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:47.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:48 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:48.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:49 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:49 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:49.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:50 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:50.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:51 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:51.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 5703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:52 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:52.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:11:53 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 5703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:53 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:54.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:54 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:11:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/902276037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:11:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:11:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/902276037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:11:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 10:11:55 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:55.455 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:11:55 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:11:55.457 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:11:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:11:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:56.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:11:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:11:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 19K writes, 88K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s#012Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1897 writes, 9250 keys, 1897 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s#012Interval WAL: 1896 writes, 1896 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     65.8      1.56              0.44        62    0.025       0      0       0.0       0.0#012  L6      1/0   10.79 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.7    111.9     96.6      6.02              2.22        61    0.099    608K    33K       0.0       0.0#012 Sum      1/0   10.79 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.7     88.9     90.3      7.58              2.66       123    0.062    608K    33K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1    118.9    121.8      0.69              0.31        14    0.049    100K   4338       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    111.9     96.6      6.02              2.22        61    0.099    608K    33K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     66.0      1.56              0.44        61    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.100, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.67 GB write, 0.11 MB/s write, 0.66 GB read, 0.11 MB/s read, 7.6 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 72.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000339 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3862,69.04 MB,22.7119%) FilterBlock(124,1.65 MB,0.541301%) IndexBlock(124,2.09 MB,0.686731%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:11:57 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 10:11:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:11:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:57.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:11:58 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:58.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:59 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:59 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:11:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:11:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:59.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:00 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:00.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:12:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 5708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:01.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:02 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 5708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:02.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:12:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:03.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:12:04.460 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:12:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0027915547064023822 of space, bias 1.0, pg target 0.8263001930951052 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:12:05 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 10:12:05 np0005592157 podman[336447]: 2026-01-22 15:12:05.36825888 +0000 UTC m=+0.085781059 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:12:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:05.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:06 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:06 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 5713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:06.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 10:12:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:07.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:07 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:07 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 5713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:09 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:09 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 10:12:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:09.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:10 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:10.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 5718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:11 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:11 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:11.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:12.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:13 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:13 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 5718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:13 np0005592157 podman[336470]: 2026-01-22 15:12:13.36571083 +0000 UTC m=+0.110181171 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:12:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:12:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:13.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:12:14 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:14 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 10:12:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:14.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 10:12:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:12:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:15.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:12:16 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:12:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:12:16 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 97 slow ops, oldest one blocked for 5728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:16.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: Health check update: 97 slow ops, oldest one blocked for 5728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:12:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:17.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fa118545-2a1b-41f1-8e00-8495b35628ec does not exist
Jan 22 10:12:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d4180e86-6b20-4dd5-a9ce-bd65cb1f6c65 does not exist
Jan 22 10:12:17 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0355f68c-fb5c-444c-a6d4-5ec922fba11c does not exist
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:12:17 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:12:18 np0005592157 podman[336770]: 2026-01-22 15:12:18.592777534 +0000 UTC m=+0.059860389 container create abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 10:12:18 np0005592157 systemd[1]: Started libpod-conmon-abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6.scope.
Jan 22 10:12:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:12:18 np0005592157 podman[336770]: 2026-01-22 15:12:18.56548901 +0000 UTC m=+0.032571865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:12:18 np0005592157 podman[336770]: 2026-01-22 15:12:18.676845569 +0000 UTC m=+0.143928424 container init abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meitner, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 22 10:12:18 np0005592157 podman[336770]: 2026-01-22 15:12:18.685867972 +0000 UTC m=+0.152950807 container start abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 22 10:12:18 np0005592157 podman[336770]: 2026-01-22 15:12:18.690204499 +0000 UTC m=+0.157287364 container attach abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:12:18 np0005592157 admiring_meitner[336785]: 167 167
Jan 22 10:12:18 np0005592157 systemd[1]: libpod-abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6.scope: Deactivated successfully.
Jan 22 10:12:18 np0005592157 podman[336770]: 2026-01-22 15:12:18.692903056 +0000 UTC m=+0.159985891 container died abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meitner, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:12:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0815e9c640ba1d7cee09ae717de69318780953fee75e251781da8d8b591491bd-merged.mount: Deactivated successfully.
Jan 22 10:12:18 np0005592157 podman[336770]: 2026-01-22 15:12:18.741088575 +0000 UTC m=+0.208171400 container remove abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:12:18 np0005592157 systemd[1]: libpod-conmon-abe48e3b64a728ea1f107c2177b0bc4166758cf8501811570450b4338a4022e6.scope: Deactivated successfully.
Jan 22 10:12:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:18 np0005592157 podman[336808]: 2026-01-22 15:12:18.919041919 +0000 UTC m=+0.042654584 container create 81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yonath, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:12:18 np0005592157 systemd[1]: Started libpod-conmon-81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965.scope.
Jan 22 10:12:18 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:12:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:18 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:12:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:12:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0df91448d69ca273b1657d6e6c2dac5b51e6b2ec2d12744007661f7b6207b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0df91448d69ca273b1657d6e6c2dac5b51e6b2ec2d12744007661f7b6207b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0df91448d69ca273b1657d6e6c2dac5b51e6b2ec2d12744007661f7b6207b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0df91448d69ca273b1657d6e6c2dac5b51e6b2ec2d12744007661f7b6207b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:18 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0df91448d69ca273b1657d6e6c2dac5b51e6b2ec2d12744007661f7b6207b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:18 np0005592157 podman[336808]: 2026-01-22 15:12:18.902564732 +0000 UTC m=+0.026177417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:12:19 np0005592157 podman[336808]: 2026-01-22 15:12:19.0061801 +0000 UTC m=+0.129792785 container init 81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yonath, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:12:19 np0005592157 podman[336808]: 2026-01-22 15:12:19.014658449 +0000 UTC m=+0.138271114 container start 81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yonath, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 10:12:19 np0005592157 podman[336808]: 2026-01-22 15:12:19.020159085 +0000 UTC m=+0.143771750 container attach 81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:12:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:19 np0005592157 gracious_yonath[336824]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:12:19 np0005592157 gracious_yonath[336824]: --> relative data size: 1.0
Jan 22 10:12:19 np0005592157 gracious_yonath[336824]: --> All data devices are unavailable
Jan 22 10:12:19 np0005592157 systemd[1]: libpod-81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965.scope: Deactivated successfully.
Jan 22 10:12:19 np0005592157 podman[336808]: 2026-01-22 15:12:19.836104419 +0000 UTC m=+0.959717094 container died 81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yonath, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:12:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:12:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:19.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:12:20 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6f0df91448d69ca273b1657d6e6c2dac5b51e6b2ec2d12744007661f7b6207b7-merged.mount: Deactivated successfully.
Jan 22 10:12:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:21.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:21 np0005592157 podman[336808]: 2026-01-22 15:12:21.901158561 +0000 UTC m=+3.024771256 container remove 81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yonath, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:12:21 np0005592157 systemd[1]: libpod-conmon-81b05905838f4ac95492de6eaa8061ee8342a4a751723dd6840fa47f1f5d8965.scope: Deactivated successfully.
Jan 22 10:12:22 np0005592157 podman[336996]: 2026-01-22 15:12:22.652836888 +0000 UTC m=+0.047445062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:12:22 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:22 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:22.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:23 np0005592157 podman[336996]: 2026-01-22 15:12:23.023248203 +0000 UTC m=+0.417856317 container create 33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 10:12:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 97 slow ops, oldest one blocked for 5733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:23 np0005592157 systemd[1]: Started libpod-conmon-33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56.scope.
Jan 22 10:12:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:12:23 np0005592157 podman[336996]: 2026-01-22 15:12:23.758024313 +0000 UTC m=+1.152632497 container init 33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:12:23 np0005592157 podman[336996]: 2026-01-22 15:12:23.766464952 +0000 UTC m=+1.161073066 container start 33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:12:23 np0005592157 friendly_ramanujan[337062]: 167 167
Jan 22 10:12:23 np0005592157 systemd[1]: libpod-33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56.scope: Deactivated successfully.
Jan 22 10:12:23 np0005592157 podman[336996]: 2026-01-22 15:12:23.789018499 +0000 UTC m=+1.183626613 container attach 33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 10:12:23 np0005592157 podman[336996]: 2026-01-22 15:12:23.790403083 +0000 UTC m=+1.185011237 container died 33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:12:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-859f28b46884bf36dfdbfde07dcbc9c7672c1836b666db6f2e03aae4d89b666a-merged.mount: Deactivated successfully.
Jan 22 10:12:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:23 np0005592157 podman[336996]: 2026-01-22 15:12:23.880505047 +0000 UTC m=+1.275113111 container remove 33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 10:12:23 np0005592157 systemd[1]: libpod-conmon-33151b06ffd8ef90e9b7c6ca6284bc0acb18966aaee8f92600393a2e361b6d56.scope: Deactivated successfully.
Jan 22 10:12:24 np0005592157 podman[337087]: 2026-01-22 15:12:24.105039921 +0000 UTC m=+0.049363370 container create 5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_archimedes, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:12:24 np0005592157 systemd[1]: Started libpod-conmon-5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99.scope.
Jan 22 10:12:24 np0005592157 podman[337087]: 2026-01-22 15:12:24.082868183 +0000 UTC m=+0.027191642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:12:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:12:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7266b63e4f31640867c1cc39b3dc6d5bc8b2ccc3a5d27b3ec74e5ba0f879ccbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7266b63e4f31640867c1cc39b3dc6d5bc8b2ccc3a5d27b3ec74e5ba0f879ccbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7266b63e4f31640867c1cc39b3dc6d5bc8b2ccc3a5d27b3ec74e5ba0f879ccbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:24 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7266b63e4f31640867c1cc39b3dc6d5bc8b2ccc3a5d27b3ec74e5ba0f879ccbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:24 np0005592157 podman[337087]: 2026-01-22 15:12:24.22045353 +0000 UTC m=+0.164776939 container init 5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 22 10:12:24 np0005592157 podman[337087]: 2026-01-22 15:12:24.232816855 +0000 UTC m=+0.177140274 container start 5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:12:24 np0005592157 podman[337087]: 2026-01-22 15:12:24.236882126 +0000 UTC m=+0.181205535 container attach 5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_archimedes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:12:24 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592157 ceph-mon[74359]: Health check update: 97 slow ops, oldest one blocked for 5733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:24.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]: {
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:    "0": [
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:        {
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "devices": [
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "/dev/loop3"
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            ],
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "lv_name": "ceph_lv0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "lv_size": "7511998464",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "name": "ceph_lv0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "tags": {
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.cluster_name": "ceph",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.crush_device_class": "",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.encrypted": "0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.osd_id": "0",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.type": "block",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:                "ceph.vdo": "0"
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            },
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "type": "block",
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:            "vg_name": "ceph_vg0"
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:        }
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]:    ]
Jan 22 10:12:24 np0005592157 loving_archimedes[337104]: }
Jan 22 10:12:25 np0005592157 systemd[1]: libpod-5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99.scope: Deactivated successfully.
Jan 22 10:12:25 np0005592157 podman[337087]: 2026-01-22 15:12:25.03338305 +0000 UTC m=+0.977706489 container died 5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_archimedes, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 10:12:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:12:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:12:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7266b63e4f31640867c1cc39b3dc6d5bc8b2ccc3a5d27b3ec74e5ba0f879ccbc-merged.mount: Deactivated successfully.
Jan 22 10:12:25 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:25 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:26 np0005592157 podman[337087]: 2026-01-22 15:12:26.504104148 +0000 UTC m=+2.448427567 container remove 5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Jan 22 10:12:26 np0005592157 systemd[1]: libpod-conmon-5a3667490605b094c6868bedc6393a5b6b09bbe9c09b9fb34f8180d05a3d3a99.scope: Deactivated successfully.
Jan 22 10:12:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:26.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:27 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:27 np0005592157 podman[337268]: 2026-01-22 15:12:27.337906603 +0000 UTC m=+0.055153712 container create ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 22 10:12:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:27 np0005592157 systemd[1]: Started libpod-conmon-ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc.scope.
Jan 22 10:12:27 np0005592157 podman[337268]: 2026-01-22 15:12:27.311594994 +0000 UTC m=+0.028842193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:12:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:12:27 np0005592157 podman[337268]: 2026-01-22 15:12:27.427494725 +0000 UTC m=+0.144741844 container init ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_engelbart, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:12:27 np0005592157 podman[337268]: 2026-01-22 15:12:27.440051295 +0000 UTC m=+0.157298454 container start ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:12:27 np0005592157 quizzical_engelbart[337285]: 167 167
Jan 22 10:12:27 np0005592157 podman[337268]: 2026-01-22 15:12:27.444585597 +0000 UTC m=+0.161832756 container attach ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 10:12:27 np0005592157 systemd[1]: libpod-ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc.scope: Deactivated successfully.
Jan 22 10:12:27 np0005592157 podman[337268]: 2026-01-22 15:12:27.445761856 +0000 UTC m=+0.163008965 container died ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:12:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-95b0ed9ac6eb844ebf0adb653bd800f7a39d31c366c1d216b5d331bd4abd153b-merged.mount: Deactivated successfully.
Jan 22 10:12:27 np0005592157 podman[337268]: 2026-01-22 15:12:27.504360883 +0000 UTC m=+0.221608002 container remove ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_engelbart, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:12:27 np0005592157 systemd[1]: libpod-conmon-ef4b90a6cb30b8cefdc084d767cbc3d02b702aefb3047fcaceffe0e0643f9fcc.scope: Deactivated successfully.
Jan 22 10:12:27 np0005592157 podman[337309]: 2026-01-22 15:12:27.733128081 +0000 UTC m=+0.063725045 container create 3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heisenberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:12:27 np0005592157 systemd[1]: Started libpod-conmon-3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99.scope.
Jan 22 10:12:27 np0005592157 podman[337309]: 2026-01-22 15:12:27.701981162 +0000 UTC m=+0.032578196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:12:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:12:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2b5465ebe47eab3b8892a1f6cdf448a48e8358931bbc62fd7c64fb358df6a40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2b5465ebe47eab3b8892a1f6cdf448a48e8358931bbc62fd7c64fb358df6a40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2b5465ebe47eab3b8892a1f6cdf448a48e8358931bbc62fd7c64fb358df6a40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2b5465ebe47eab3b8892a1f6cdf448a48e8358931bbc62fd7c64fb358df6a40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:12:27 np0005592157 podman[337309]: 2026-01-22 15:12:27.851985795 +0000 UTC m=+0.182582749 container init 3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 10:12:27 np0005592157 podman[337309]: 2026-01-22 15:12:27.865702484 +0000 UTC m=+0.196299408 container start 3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:12:27 np0005592157 podman[337309]: 2026-01-22 15:12:27.869092267 +0000 UTC m=+0.199689191 container attach 3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:12:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:27.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:28 np0005592157 ceph-mon[74359]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:28 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]: {
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]:        "osd_id": 0,
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]:        "type": "bluestore"
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]:    }
Jan 22 10:12:28 np0005592157 confident_heisenberg[337326]: }
Jan 22 10:12:28 np0005592157 systemd[1]: libpod-3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99.scope: Deactivated successfully.
Jan 22 10:12:28 np0005592157 podman[337309]: 2026-01-22 15:12:28.72385629 +0000 UTC m=+1.054453224 container died 3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heisenberg, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:12:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:28.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:29.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b2b5465ebe47eab3b8892a1f6cdf448a48e8358931bbc62fd7c64fb358df6a40-merged.mount: Deactivated successfully.
Jan 22 10:12:29 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:30.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 97 slow ops, oldest one blocked for 5738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:31 np0005592157 podman[337309]: 2026-01-22 15:12:31.817639709 +0000 UTC m=+4.148236673 container remove 3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:12:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:12:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:31.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:31 np0005592157 systemd[1]: libpod-conmon-3f812d620c8e7d7ab9c2a7b9aaacbbb4396373917175542bc046971554e06f99.scope: Deactivated successfully.
Jan 22 10:12:32 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:12:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:32.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d393e4c6-06dc-4d12-aa08-45729501c5bc does not exist
Jan 22 10:12:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1017116a-ba86-427a-b7a8-e316a1f522f4 does not exist
Jan 22 10:12:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 702f016a-a0a1-4480-ac27-ade5ded2dcc1 does not exist
Jan 22 10:12:33 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:33 np0005592157 ceph-mon[74359]: Health check update: 97 slow ops, oldest one blocked for 5738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:33 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:34 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:34.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:35.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:35 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:36 np0005592157 podman[337415]: 2026-01-22 15:12:36.363278249 +0000 UTC m=+0.096929674 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 10:12:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 5748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:36.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:37 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:37 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 5748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:37.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:38 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:38.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:39.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:40 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:40.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:41 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:41 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 5753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:42 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:42.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:43.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:43 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 5753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:43 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:44 np0005592157 podman[337490]: 2026-01-22 15:12:44.377734231 +0000 UTC m=+0.104193243 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:12:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:44.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:44 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:12:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:12:46 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:12:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:12:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:46.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:47 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:12:47
Jan 22 10:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'vms']
Jan 22 10:12:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:12:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:12:47.649 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:12:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:12:47.650 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:12:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:12:47.650 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:12:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:48 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:48.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:49 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:50 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:50.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:51 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 5758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:53 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:53 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 5758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:53.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:54.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:55 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:55.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:12:56 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:56.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 5768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:12:57.170 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:12:57 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:12:57.171 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:12:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:57.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:58 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:58 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 5768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:12:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:58.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:12:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:12:59 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:12:59 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:12:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:12:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:12:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:59.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:00 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:00.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:01 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:01.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:02 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:02 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:02.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:03 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:13:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:13:04 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:13:04.174 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:13:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:04.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0027915547064023822 of space, bias 1.0, pg target 0.8263001930951052 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:13:05 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:13:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:05.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:13:06 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:13:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:06.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:13:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:07 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:07 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:07 np0005592157 podman[337577]: 2026-01-22 15:13:07.347597586 +0000 UTC m=+0.078490578 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 22 10:13:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:07.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:08 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:08.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:09 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:09 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:09.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:10 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:10.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:11.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:12 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:13:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:12.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:13:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:13 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:13 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:14 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:14.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:15 np0005592157 podman[337600]: 2026-01-22 15:13:15.371321774 +0000 UTC m=+0.098870211 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 10:13:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:15 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:15.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:13:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:13:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:16.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:17 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:17 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:17.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:18 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:18.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:19 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:19.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:20 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:20.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:21.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.096758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802096869, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 2379, "num_deletes": 544, "total_data_size": 3190796, "memory_usage": 3255712, "flush_reason": "Manual Compaction"}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802131180, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 3125853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87741, "largest_seqno": 90119, "table_properties": {"data_size": 3116184, "index_size": 5202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30539, "raw_average_key_size": 22, "raw_value_size": 3092778, "raw_average_value_size": 2316, "num_data_blocks": 224, "num_entries": 1335, "num_filter_entries": 1335, "num_deletions": 544, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094628, "oldest_key_time": 1769094628, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 34470 microseconds, and 13918 cpu microseconds.
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.131232) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 3125853 bytes OK
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.131248) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.133485) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.133495) EVENT_LOG_v1 {"time_micros": 1769094802133492, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.133509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 3179575, prev total WAL file size 3179575, number of live WAL files 2.
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.134386) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353330' seq:72057594037927935, type:22 .. '6C6F676D0034373834' seq:0, type:0; will stop at (end)
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(3052KB)], [200(10MB)]
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802134418, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 14437000, "oldest_snapshot_seqno": -1}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 14253 keys, 14224265 bytes, temperature: kUnknown
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802229268, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 14224265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14144419, "index_size": 43125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35653, "raw_key_size": 391366, "raw_average_key_size": 27, "raw_value_size": 13899978, "raw_average_value_size": 975, "num_data_blocks": 1579, "num_entries": 14253, "num_filter_entries": 14253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.229616) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 14224265 bytes
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.231309) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.0 rd, 149.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 15354, records dropped: 1101 output_compression: NoCompression
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.231340) EVENT_LOG_v1 {"time_micros": 1769094802231325, "job": 126, "event": "compaction_finished", "compaction_time_micros": 94950, "compaction_time_cpu_micros": 31555, "output_level": 6, "num_output_files": 1, "total_output_size": 14224265, "num_input_records": 15354, "num_output_records": 14253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802232495, "job": 126, "event": "table_file_deletion", "file_number": 202}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802236333, "job": 126, "event": "table_file_deletion", "file_number": 200}
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.134315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.236415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.236423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.236425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.236426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:22.236428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:13:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:22.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:13:23 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:23 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:23.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:24 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:24.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:25 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:25 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:25.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:26 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:27 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:27 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:27.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:28 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:28.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:29 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:29.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:30 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:30.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:31 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:13:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:31.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:13:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:32 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:32 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:33 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bbfffc08-8213-4f58-aff6-7fa6edeb8f7d does not exist
Jan 22 10:13:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d4120d06-1eca-4190-a3bd-6c133f82a3b4 does not exist
Jan 22 10:13:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 55c2f6bc-33c2-4f3e-b3c5-9d92ac0b7632 does not exist
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.667550) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814667577, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 418, "num_deletes": 274, "total_data_size": 228491, "memory_usage": 237544, "flush_reason": "Manual Compaction"}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814671293, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 224727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90121, "largest_seqno": 90537, "table_properties": {"data_size": 222373, "index_size": 389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6751, "raw_average_key_size": 19, "raw_value_size": 217350, "raw_average_value_size": 635, "num_data_blocks": 17, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094802, "oldest_key_time": 1769094802, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 3806 microseconds, and 1149 cpu microseconds.
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.671348) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 224727 bytes OK
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.671372) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675096) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675120) EVENT_LOG_v1 {"time_micros": 1769094814675113, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675141) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 225783, prev total WAL file size 225783, number of live WAL files 2.
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675528) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(219KB)], [203(13MB)]
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814675563, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 14448992, "oldest_snapshot_seqno": -1}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 14037 keys, 12781384 bytes, temperature: kUnknown
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814765700, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 12781384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12703900, "index_size": 41275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35141, "raw_key_size": 387453, "raw_average_key_size": 27, "raw_value_size": 12463860, "raw_average_value_size": 887, "num_data_blocks": 1497, "num_entries": 14037, "num_filter_entries": 14037, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.765947) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12781384 bytes
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.767403) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(121.2) write-amplify(56.9) OK, records in: 14595, records dropped: 558 output_compression: NoCompression
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.767419) EVENT_LOG_v1 {"time_micros": 1769094814767412, "job": 128, "event": "compaction_finished", "compaction_time_micros": 90207, "compaction_time_cpu_micros": 29521, "output_level": 6, "num_output_files": 1, "total_output_size": 12781384, "num_input_records": 14595, "num_output_records": 14037, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814767548, "job": 128, "event": "table_file_deletion", "file_number": 205}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814769825, "job": 128, "event": "table_file_deletion", "file_number": 203}
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.769909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.769918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.769923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.769964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:13:34.769968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:13:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:35 np0005592157 podman[337956]: 2026-01-22 15:13:35.203857536 +0000 UTC m=+0.043069414 container create 1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:13:35 np0005592157 systemd[1]: Started libpod-conmon-1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3.scope.
Jan 22 10:13:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:13:35 np0005592157 podman[337956]: 2026-01-22 15:13:35.184472598 +0000 UTC m=+0.023684506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:13:35 np0005592157 podman[337956]: 2026-01-22 15:13:35.293694304 +0000 UTC m=+0.132906252 container init 1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Jan 22 10:13:35 np0005592157 podman[337956]: 2026-01-22 15:13:35.305746422 +0000 UTC m=+0.144958340 container start 1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 22 10:13:35 np0005592157 podman[337956]: 2026-01-22 15:13:35.310023457 +0000 UTC m=+0.149235415 container attach 1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:13:35 np0005592157 lucid_ardinghelli[337972]: 167 167
Jan 22 10:13:35 np0005592157 systemd[1]: libpod-1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3.scope: Deactivated successfully.
Jan 22 10:13:35 np0005592157 podman[337956]: 2026-01-22 15:13:35.311206577 +0000 UTC m=+0.150418475 container died 1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:13:35 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3110e63cce034e0649eae3712a863be799f6d9a4c05585056a6456516b66ffd2-merged.mount: Deactivated successfully.
Jan 22 10:13:35 np0005592157 podman[337956]: 2026-01-22 15:13:35.361723664 +0000 UTC m=+0.200935552 container remove 1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:13:35 np0005592157 systemd[1]: libpod-conmon-1619835dae7c14863508be7dea915fb3362458e5b1e4b0ea9beea49d9b7880d3.scope: Deactivated successfully.
Jan 22 10:13:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:35 np0005592157 podman[337994]: 2026-01-22 15:13:35.576450905 +0000 UTC m=+0.056250920 container create d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:13:35 np0005592157 podman[337994]: 2026-01-22 15:13:35.544798974 +0000 UTC m=+0.024599009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:13:35 np0005592157 systemd[1]: Started libpod-conmon-d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf.scope.
Jan 22 10:13:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:13:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38493dd389be29222e6645ea691ba24a71c361eed1e13f6cae05f4635b251f62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38493dd389be29222e6645ea691ba24a71c361eed1e13f6cae05f4635b251f62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38493dd389be29222e6645ea691ba24a71c361eed1e13f6cae05f4635b251f62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38493dd389be29222e6645ea691ba24a71c361eed1e13f6cae05f4635b251f62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:35 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38493dd389be29222e6645ea691ba24a71c361eed1e13f6cae05f4635b251f62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:35 np0005592157 podman[337994]: 2026-01-22 15:13:35.676476684 +0000 UTC m=+0.156276679 container init d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 10:13:35 np0005592157 podman[337994]: 2026-01-22 15:13:35.686452681 +0000 UTC m=+0.166252666 container start d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 10:13:35 np0005592157 podman[337994]: 2026-01-22 15:13:35.690953172 +0000 UTC m=+0.170753177 container attach d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:13:35 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:35.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:36 np0005592157 quirky_austin[338011]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:13:36 np0005592157 quirky_austin[338011]: --> relative data size: 1.0
Jan 22 10:13:36 np0005592157 quirky_austin[338011]: --> All data devices are unavailable
Jan 22 10:13:36 np0005592157 systemd[1]: libpod-d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf.scope: Deactivated successfully.
Jan 22 10:13:36 np0005592157 podman[337994]: 2026-01-22 15:13:36.468355474 +0000 UTC m=+0.948155449 container died d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:13:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-38493dd389be29222e6645ea691ba24a71c361eed1e13f6cae05f4635b251f62-merged.mount: Deactivated successfully.
Jan 22 10:13:36 np0005592157 podman[337994]: 2026-01-22 15:13:36.526245224 +0000 UTC m=+1.006045199 container remove d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:13:36 np0005592157 systemd[1]: libpod-conmon-d4116231a471e5313422109b3b6d8627b71267c498f10791249d7ccd2d7a1bdf.scope: Deactivated successfully.
Jan 22 10:13:36 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:36.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:37 np0005592157 podman[338177]: 2026-01-22 15:13:37.377031907 +0000 UTC m=+0.054889906 container create d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:13:37 np0005592157 systemd[1]: Started libpod-conmon-d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d.scope.
Jan 22 10:13:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:13:37 np0005592157 podman[338177]: 2026-01-22 15:13:37.350186024 +0000 UTC m=+0.028044153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:13:37 np0005592157 podman[338177]: 2026-01-22 15:13:37.448546993 +0000 UTC m=+0.126405032 container init d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:13:37 np0005592157 podman[338177]: 2026-01-22 15:13:37.461879592 +0000 UTC m=+0.139737631 container start d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:13:37 np0005592157 vigilant_swirles[338194]: 167 167
Jan 22 10:13:37 np0005592157 systemd[1]: libpod-d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d.scope: Deactivated successfully.
Jan 22 10:13:37 np0005592157 podman[338177]: 2026-01-22 15:13:37.466791873 +0000 UTC m=+0.144649872 container attach d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:13:37 np0005592157 podman[338177]: 2026-01-22 15:13:37.467530201 +0000 UTC m=+0.145388200 container died d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 22 10:13:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6200f2fc03ce4558fd9d06a8ed3ea356f93e2a6b6abbac40c5b3f64d5b1eb53a-merged.mount: Deactivated successfully.
Jan 22 10:13:37 np0005592157 podman[338177]: 2026-01-22 15:13:37.506960525 +0000 UTC m=+0.184818524 container remove d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_swirles, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:13:37 np0005592157 systemd[1]: libpod-conmon-d217f41e1047e3a34d2828271afb454d8da6a537fb560e4e379e36703c8f4d5d.scope: Deactivated successfully.
Jan 22 10:13:37 np0005592157 podman[338191]: 2026-01-22 15:13:37.524971329 +0000 UTC m=+0.105186728 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:13:37 np0005592157 podman[338237]: 2026-01-22 15:13:37.677004033 +0000 UTC m=+0.056802044 container create 7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:13:37 np0005592157 systemd[1]: Started libpod-conmon-7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac.scope.
Jan 22 10:13:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:13:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcc738e0d9f7a7fc815ae0209681bd8a8c9948c0c91160a7d879c5e51835c529/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcc738e0d9f7a7fc815ae0209681bd8a8c9948c0c91160a7d879c5e51835c529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcc738e0d9f7a7fc815ae0209681bd8a8c9948c0c91160a7d879c5e51835c529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcc738e0d9f7a7fc815ae0209681bd8a8c9948c0c91160a7d879c5e51835c529/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:37 np0005592157 podman[338237]: 2026-01-22 15:13:37.659213963 +0000 UTC m=+0.039011994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:13:37 np0005592157 podman[338237]: 2026-01-22 15:13:37.762727679 +0000 UTC m=+0.142525710 container init 7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:13:37 np0005592157 podman[338237]: 2026-01-22 15:13:37.768057751 +0000 UTC m=+0.147855772 container start 7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bohr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:13:37 np0005592157 podman[338237]: 2026-01-22 15:13:37.789699955 +0000 UTC m=+0.169497996 container attach 7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:13:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:37.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:38 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:38 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]: {
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:    "0": [
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:        {
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "devices": [
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "/dev/loop3"
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            ],
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "lv_name": "ceph_lv0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "lv_size": "7511998464",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "name": "ceph_lv0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "tags": {
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.cluster_name": "ceph",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.crush_device_class": "",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.encrypted": "0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.osd_id": "0",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.type": "block",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:                "ceph.vdo": "0"
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            },
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "type": "block",
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:            "vg_name": "ceph_vg0"
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:        }
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]:    ]
Jan 22 10:13:38 np0005592157 unruffled_bohr[338254]: }
Jan 22 10:13:38 np0005592157 systemd[1]: libpod-7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac.scope: Deactivated successfully.
Jan 22 10:13:38 np0005592157 podman[338237]: 2026-01-22 15:13:38.556245179 +0000 UTC m=+0.936043220 container died 7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:13:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dcc738e0d9f7a7fc815ae0209681bd8a8c9948c0c91160a7d879c5e51835c529-merged.mount: Deactivated successfully.
Jan 22 10:13:38 np0005592157 podman[338237]: 2026-01-22 15:13:38.632501432 +0000 UTC m=+1.012299453 container remove 7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bohr, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:13:38 np0005592157 systemd[1]: libpod-conmon-7b127ef175b466d8aee80ec95575818065b7b9db8e328c4f6225e0df3a847bac.scope: Deactivated successfully.
Jan 22 10:13:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:38.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:39 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:39 np0005592157 podman[338419]: 2026-01-22 15:13:39.39351556 +0000 UTC m=+0.047687118 container create 70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:13:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:39 np0005592157 systemd[1]: Started libpod-conmon-70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7.scope.
Jan 22 10:13:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:13:39 np0005592157 podman[338419]: 2026-01-22 15:13:39.375542356 +0000 UTC m=+0.029713934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:13:39 np0005592157 podman[338419]: 2026-01-22 15:13:39.479038102 +0000 UTC m=+0.133209700 container init 70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:13:39 np0005592157 podman[338419]: 2026-01-22 15:13:39.486055265 +0000 UTC m=+0.140226833 container start 70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:13:39 np0005592157 podman[338419]: 2026-01-22 15:13:39.489716575 +0000 UTC m=+0.143888143 container attach 70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:13:39 np0005592157 agitated_sutherland[338435]: 167 167
Jan 22 10:13:39 np0005592157 systemd[1]: libpod-70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7.scope: Deactivated successfully.
Jan 22 10:13:39 np0005592157 podman[338419]: 2026-01-22 15:13:39.492258698 +0000 UTC m=+0.146430256 container died 70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 10:13:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a7c3cca0e6906b06adfcf8562fa644ac7b23c02530da602533c1ae82684aa949-merged.mount: Deactivated successfully.
Jan 22 10:13:39 np0005592157 podman[338419]: 2026-01-22 15:13:39.541241027 +0000 UTC m=+0.195412615 container remove 70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:13:39 np0005592157 systemd[1]: libpod-conmon-70094de8f3e90dcff6744ba11a00faaa9d611b4e5f04f50e2352dfef1dcf88f7.scope: Deactivated successfully.
Jan 22 10:13:39 np0005592157 podman[338463]: 2026-01-22 15:13:39.776473005 +0000 UTC m=+0.053433571 container create 97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:13:39 np0005592157 systemd[1]: Started libpod-conmon-97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5.scope.
Jan 22 10:13:39 np0005592157 podman[338463]: 2026-01-22 15:13:39.751230501 +0000 UTC m=+0.028191107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:13:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:13:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73aec92b9cce4856397e4236241a17d682c4b7fe0de99e3618ab8d980b8adb50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73aec92b9cce4856397e4236241a17d682c4b7fe0de99e3618ab8d980b8adb50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73aec92b9cce4856397e4236241a17d682c4b7fe0de99e3618ab8d980b8adb50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73aec92b9cce4856397e4236241a17d682c4b7fe0de99e3618ab8d980b8adb50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:13:39 np0005592157 podman[338463]: 2026-01-22 15:13:39.865032161 +0000 UTC m=+0.141992767 container init 97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 22 10:13:39 np0005592157 podman[338463]: 2026-01-22 15:13:39.880047522 +0000 UTC m=+0.157008088 container start 97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:13:39 np0005592157 podman[338463]: 2026-01-22 15:13:39.884069471 +0000 UTC m=+0.161030057 container attach 97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 10:13:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:13:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:39.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:13:40 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]: {
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]:        "osd_id": 0,
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]:        "type": "bluestore"
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]:    }
Jan 22 10:13:40 np0005592157 infallible_shannon[338479]: }
Jan 22 10:13:40 np0005592157 systemd[1]: libpod-97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5.scope: Deactivated successfully.
Jan 22 10:13:40 np0005592157 podman[338463]: 2026-01-22 15:13:40.720056759 +0000 UTC m=+0.997017315 container died 97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:13:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-73aec92b9cce4856397e4236241a17d682c4b7fe0de99e3618ab8d980b8adb50-merged.mount: Deactivated successfully.
Jan 22 10:13:40 np0005592157 podman[338463]: 2026-01-22 15:13:40.786781156 +0000 UTC m=+1.063741712 container remove 97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:13:40 np0005592157 systemd[1]: libpod-conmon-97cb17fa77a540d8c439a02c4f01269a12e58ddba99def4d2e9eccf19e9cdbe5.scope: Deactivated successfully.
Jan 22 10:13:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:13:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:13:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 248f5f21-c7cb-48f0-a5d1-503937af1bc2 does not exist
Jan 22 10:13:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d29f5e72-3f3d-4017-9df9-d79f5c4cb520 does not exist
Jan 22 10:13:40 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 18d14b56-a15e-429c-9dc3-7d4aa54e6053 does not exist
Jan 22 10:13:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:40.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:41 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:41 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:41 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:41.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:42 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:43 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:43 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:43.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:44 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:44.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:45 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:45.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:46 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:46 np0005592157 podman[338615]: 2026-01-22 15:13:46.398767875 +0000 UTC m=+0.124476734 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 10:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:13:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:13:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:46.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:47 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:13:47
Jan 22 10:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', 'vms', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta']
Jan 22 10:13:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:13:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:13:47.649 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:13:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:13:47.650 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:13:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:13:47.650 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:13:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:47.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:48 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:48.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:49 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:50.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:50 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:50.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:51 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:51 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:52.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:52 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:52 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:52.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:13:53 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:54.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:54 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:54.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:13:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:56.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:56 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:56.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:13:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:58.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:58 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:13:58.307 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:13:58 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:13:58.309 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:13:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:13:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:13:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:58.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:13:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:14:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:00.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:00 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:00.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:14:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:01 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:02.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:02 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:02 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:02.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 10:14:03 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:14:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:04.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:14:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:04.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:05 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028035504490042805 of space, bias 1.0, pg target 0.829850932905267 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:14:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 19 op/s
Jan 22 10:14:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:06.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:06 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:06.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:07 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:07 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:07 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:14:07.312 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:14:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 10:14:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:08.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:08 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:08 np0005592157 podman[338703]: 2026-01-22 15:14:08.368616743 +0000 UTC m=+0.094231817 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:14:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:14:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:09.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:14:09 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 694 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 10:14:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:14:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:10.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:14:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:11.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:11 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:11 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 21 op/s
Jan 22 10:14:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:12.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:13.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 21 op/s
Jan 22 10:14:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:13 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:14.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:14 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:15.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 KiB/s wr, 36 op/s
Jan 22 10:14:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:16.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:14:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:14:16 np0005592157 ceph-mon[74359]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:17.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:17 np0005592157 podman[338726]: 2026-01-22 15:14:17.377540764 +0000 UTC m=+0.116903647 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Jan 22 10:14:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 21 op/s
Jan 22 10:14:17 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:18.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:19.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:19 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 864 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 997 KiB/s wr, 35 op/s
Jan 22 10:14:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:20.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:20 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:21 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 36 op/s
Jan 22 10:14:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:22.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 118 slow ops, oldest one blocked for 5847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592157 ceph-mon[74359]: Health check update: 118 slow ops, oldest one blocked for 5847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:23.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 22 10:14:23 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:24.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:24 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:25.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 856 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.7 MiB/s wr, 41 op/s
Jan 22 10:14:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:26.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:26 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:27.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 5857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 856 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Jan 22 10:14:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 5857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:28.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:29.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 22 10:14:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:30 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:31.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 795 KiB/s wr, 17 op/s
Jan 22 10:14:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:14:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 16K writes, 53K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 5472 syncs, 3.06 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 920 writes, 2022 keys, 920 commit groups, 1.0 writes per commit group, ingest: 0.68 MB, 0.00 MB/s#012Interval WAL: 920 writes, 435 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Jan 22 10:14:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 5862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:33.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:33 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:33 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 5862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 10:14:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:34.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:34 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:35.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 10:14:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:35 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:14:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:14:36 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 596 B/s wr, 4 op/s
Jan 22 10:14:37 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:38 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:39.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:39 np0005592157 podman[338814]: 2026-01-22 15:14:39.35417522 +0000 UTC m=+0.083089563 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:14:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 4 op/s
Jan 22 10:14:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:40.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:40 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:41.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:41 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 5867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 5867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:42 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a92fe4de-e121-47c0-8677-d265db80837b does not exist
Jan 22 10:14:42 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7cccab85-b0b4-40cb-a7c3-155d93e61159 does not exist
Jan 22 10:14:42 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a4520424-eefa-4354-a25d-bc78502ff39f does not exist
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:14:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:14:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:43.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:43 np0005592157 podman[339106]: 2026-01-22 15:14:43.440132603 +0000 UTC m=+0.048717724 container create faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:14:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:43 np0005592157 systemd[1]: Started libpod-conmon-faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494.scope.
Jan 22 10:14:43 np0005592157 podman[339106]: 2026-01-22 15:14:43.413039644 +0000 UTC m=+0.021624755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:14:43 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:14:43 np0005592157 podman[339106]: 2026-01-22 15:14:43.536185484 +0000 UTC m=+0.144770675 container init faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:14:43 np0005592157 podman[339106]: 2026-01-22 15:14:43.548714383 +0000 UTC m=+0.157299494 container start faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:14:43 np0005592157 podman[339106]: 2026-01-22 15:14:43.552501297 +0000 UTC m=+0.161086428 container attach faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:14:43 np0005592157 determined_snyder[339122]: 167 167
Jan 22 10:14:43 np0005592157 systemd[1]: libpod-faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494.scope: Deactivated successfully.
Jan 22 10:14:43 np0005592157 podman[339106]: 2026-01-22 15:14:43.55749663 +0000 UTC m=+0.166081751 container died faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:14:43 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c85d1dd79a3ec44d12a595489d594e4e65a5cc82fc9bee1154fe88061807674c-merged.mount: Deactivated successfully.
Jan 22 10:14:43 np0005592157 podman[339106]: 2026-01-22 15:14:43.610813847 +0000 UTC m=+0.219399008 container remove faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 10:14:43 np0005592157 systemd[1]: libpod-conmon-faedbf9f93399f26cee1258b119f2ac61cd8036992970f7305ec3d9fc5222494.scope: Deactivated successfully.
Jan 22 10:14:43 np0005592157 podman[339148]: 2026-01-22 15:14:43.821698313 +0000 UTC m=+0.056911976 container create 252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 10:14:43 np0005592157 systemd[1]: Started libpod-conmon-252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9.scope.
Jan 22 10:14:43 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:14:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54000d4ffcbbe271eb63496885e08fe77b6421bbafb53f97c2bbb85a358b0d74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54000d4ffcbbe271eb63496885e08fe77b6421bbafb53f97c2bbb85a358b0d74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54000d4ffcbbe271eb63496885e08fe77b6421bbafb53f97c2bbb85a358b0d74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54000d4ffcbbe271eb63496885e08fe77b6421bbafb53f97c2bbb85a358b0d74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:43 np0005592157 podman[339148]: 2026-01-22 15:14:43.803732669 +0000 UTC m=+0.038946342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:14:43 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54000d4ffcbbe271eb63496885e08fe77b6421bbafb53f97c2bbb85a358b0d74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:43 np0005592157 podman[339148]: 2026-01-22 15:14:43.916584146 +0000 UTC m=+0.151797819 container init 252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:14:43 np0005592157 podman[339148]: 2026-01-22 15:14:43.924308166 +0000 UTC m=+0.159521849 container start 252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:14:43 np0005592157 podman[339148]: 2026-01-22 15:14:43.929084714 +0000 UTC m=+0.164298397 container attach 252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:14:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:14:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:44.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:14:44 np0005592157 competent_shamir[339164]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:14:44 np0005592157 competent_shamir[339164]: --> relative data size: 1.0
Jan 22 10:14:44 np0005592157 competent_shamir[339164]: --> All data devices are unavailable
Jan 22 10:14:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:14:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:14:44 np0005592157 systemd[1]: libpod-252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9.scope: Deactivated successfully.
Jan 22 10:14:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 10:14:44 np0005592157 podman[339230]: 2026-01-22 15:14:44.797600405 +0000 UTC m=+0.033217991 container died 252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 10:14:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-54000d4ffcbbe271eb63496885e08fe77b6421bbafb53f97c2bbb85a358b0d74-merged.mount: Deactivated successfully.
Jan 22 10:14:44 np0005592157 podman[339230]: 2026-01-22 15:14:44.85004267 +0000 UTC m=+0.085660236 container remove 252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 22 10:14:44 np0005592157 systemd[1]: libpod-conmon-252cd7dc3f9ce97a8f5b395f9e4c003a78893af48b763273488ead16631d28d9.scope: Deactivated successfully.
Jan 22 10:14:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:45.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:45 np0005592157 podman[339387]: 2026-01-22 15:14:45.636956767 +0000 UTC m=+0.042000058 container create 2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:14:45 np0005592157 systemd[1]: Started libpod-conmon-2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806.scope.
Jan 22 10:14:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:14:45 np0005592157 podman[339387]: 2026-01-22 15:14:45.620125932 +0000 UTC m=+0.025169233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:14:45 np0005592157 podman[339387]: 2026-01-22 15:14:45.72413169 +0000 UTC m=+0.129175001 container init 2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 10:14:45 np0005592157 podman[339387]: 2026-01-22 15:14:45.731742997 +0000 UTC m=+0.136786278 container start 2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:14:45 np0005592157 clever_neumann[339403]: 167 167
Jan 22 10:14:45 np0005592157 systemd[1]: libpod-2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806.scope: Deactivated successfully.
Jan 22 10:14:45 np0005592157 conmon[339403]: conmon 2f1408bf5043014d3b00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806.scope/container/memory.events
Jan 22 10:14:45 np0005592157 podman[339387]: 2026-01-22 15:14:45.736172727 +0000 UTC m=+0.141216038 container attach 2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 10:14:45 np0005592157 podman[339387]: 2026-01-22 15:14:45.736744611 +0000 UTC m=+0.141787892 container died 2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_neumann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:14:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:45 np0005592157 ceph-mon[74359]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bca4b50fe497ab1575e1074bcdadb8787a6e915f74b47e1e3aa634a4c7e270b3-merged.mount: Deactivated successfully.
Jan 22 10:14:45 np0005592157 podman[339387]: 2026-01-22 15:14:45.770701869 +0000 UTC m=+0.175745170 container remove 2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 10:14:45 np0005592157 systemd[1]: libpod-conmon-2f1408bf5043014d3b00041cb99514565b332a67f50f5f536f2b75fff6ed5806.scope: Deactivated successfully.
Jan 22 10:14:45 np0005592157 podman[339427]: 2026-01-22 15:14:45.94529022 +0000 UTC m=+0.061550571 container create 8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_franklin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:14:45 np0005592157 systemd[1]: Started libpod-conmon-8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610.scope.
Jan 22 10:14:46 np0005592157 podman[339427]: 2026-01-22 15:14:45.920231471 +0000 UTC m=+0.036491862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:14:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:14:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9fe5d3e1baec98d60a1a93ecd9b8eb744fe22d15488e32ab725bed5d0a6be57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9fe5d3e1baec98d60a1a93ecd9b8eb744fe22d15488e32ab725bed5d0a6be57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9fe5d3e1baec98d60a1a93ecd9b8eb744fe22d15488e32ab725bed5d0a6be57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9fe5d3e1baec98d60a1a93ecd9b8eb744fe22d15488e32ab725bed5d0a6be57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:46 np0005592157 podman[339427]: 2026-01-22 15:14:46.051576014 +0000 UTC m=+0.167836445 container init 8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 10:14:46 np0005592157 podman[339427]: 2026-01-22 15:14:46.063726363 +0000 UTC m=+0.179986754 container start 8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:14:46 np0005592157 podman[339427]: 2026-01-22 15:14:46.068490611 +0000 UTC m=+0.184751012 container attach 8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_franklin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:14:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:46.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:14:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:14:46 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]: {
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:    "0": [
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:        {
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "devices": [
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "/dev/loop3"
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            ],
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "lv_name": "ceph_lv0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "lv_size": "7511998464",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "name": "ceph_lv0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "tags": {
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.cluster_name": "ceph",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.crush_device_class": "",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.encrypted": "0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.osd_id": "0",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.type": "block",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:                "ceph.vdo": "0"
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            },
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "type": "block",
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:            "vg_name": "ceph_vg0"
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:        }
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]:    ]
Jan 22 10:14:46 np0005592157 interesting_franklin[339444]: }
Jan 22 10:14:46 np0005592157 systemd[1]: libpod-8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610.scope: Deactivated successfully.
Jan 22 10:14:46 np0005592157 podman[339427]: 2026-01-22 15:14:46.839530707 +0000 UTC m=+0.955791088 container died 8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_franklin, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:14:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c9fe5d3e1baec98d60a1a93ecd9b8eb744fe22d15488e32ab725bed5d0a6be57-merged.mount: Deactivated successfully.
Jan 22 10:14:46 np0005592157 podman[339427]: 2026-01-22 15:14:46.926578806 +0000 UTC m=+1.042839187 container remove 8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:14:46 np0005592157 systemd[1]: libpod-conmon-8470c62ab9a909d83b12aed0b9a5b7c69cabbdabe9e81ed82815b9c1b49fb610.scope: Deactivated successfully.
Jan 22 10:14:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:47.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 5877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.198515) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887198577, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1140, "num_deletes": 369, "total_data_size": 1303018, "memory_usage": 1324344, "flush_reason": "Manual Compaction"}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887209156, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 847476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90538, "largest_seqno": 91677, "table_properties": {"data_size": 843078, "index_size": 1601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15841, "raw_average_key_size": 23, "raw_value_size": 832092, "raw_average_value_size": 1209, "num_data_blocks": 69, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 369, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094814, "oldest_key_time": 1769094814, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 10702 microseconds, and 6353 cpu microseconds.
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209217) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 847476 bytes OK
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209236) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.212211) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.212229) EVENT_LOG_v1 {"time_micros": 1769094887212224, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.212246) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1297141, prev total WAL file size 1297141, number of live WAL files 2.
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.213012) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303038' seq:0, type:0; will stop at (end)
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(827KB)], [206(12MB)]
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887213076, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 13628860, "oldest_snapshot_seqno": -1}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 14002 keys, 10131843 bytes, temperature: kUnknown
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887314260, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 10131843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10058427, "index_size": 37335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 386524, "raw_average_key_size": 27, "raw_value_size": 9822992, "raw_average_value_size": 701, "num_data_blocks": 1335, "num_entries": 14002, "num_filter_entries": 14002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.314565) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 10131843 bytes
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.316096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.6 rd, 100.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(28.0) write-amplify(12.0) OK, records in: 14725, records dropped: 723 output_compression: NoCompression
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.316116) EVENT_LOG_v1 {"time_micros": 1769094887316107, "job": 130, "event": "compaction_finished", "compaction_time_micros": 101255, "compaction_time_cpu_micros": 38092, "output_level": 6, "num_output_files": 1, "total_output_size": 10131843, "num_input_records": 14725, "num_output_records": 14002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887316409, "job": 130, "event": "table_file_deletion", "file_number": 208}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887319016, "job": 130, "event": "table_file_deletion", "file_number": 206}
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.212837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.319062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.319068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.319070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.319072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:14:47.319074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:14:47
Jan 22 10:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images']
Jan 22 10:14:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:14:47 np0005592157 podman[339606]: 2026-01-22 15:14:47.630113705 +0000 UTC m=+0.043265280 container create 229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 10:14:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:14:47.650 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:14:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:14:47.651 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:14:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:14:47.651 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:14:47 np0005592157 systemd[1]: Started libpod-conmon-229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955.scope.
Jan 22 10:14:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:14:47 np0005592157 podman[339606]: 2026-01-22 15:14:47.704119792 +0000 UTC m=+0.117271397 container init 229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_joliot, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:14:47 np0005592157 podman[339606]: 2026-01-22 15:14:47.612323395 +0000 UTC m=+0.025475000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:14:47 np0005592157 podman[339606]: 2026-01-22 15:14:47.712751275 +0000 UTC m=+0.125902880 container start 229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:14:47 np0005592157 beautiful_joliot[339623]: 167 167
Jan 22 10:14:47 np0005592157 systemd[1]: libpod-229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955.scope: Deactivated successfully.
Jan 22 10:14:47 np0005592157 conmon[339623]: conmon 229ce3c997657c3d3846 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955.scope/container/memory.events
Jan 22 10:14:47 np0005592157 podman[339606]: 2026-01-22 15:14:47.717700457 +0000 UTC m=+0.130852052 container attach 229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_joliot, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:14:47 np0005592157 podman[339606]: 2026-01-22 15:14:47.718335333 +0000 UTC m=+0.131486898 container died 229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 10:14:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-844ae22b6adacc63a09fcce5940ee76de4bfc1f58c950f62c093d1fab9fa9f17-merged.mount: Deactivated successfully.
Jan 22 10:14:47 np0005592157 podman[339606]: 2026-01-22 15:14:47.758374071 +0000 UTC m=+0.171525676 container remove 229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:47 np0005592157 ceph-mon[74359]: Health check update: 2 slow ops, oldest one blocked for 5877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:47 np0005592157 systemd[1]: libpod-conmon-229ce3c997657c3d3846a882789e391a613de0192444b1363ae099110ef1f955.scope: Deactivated successfully.
Jan 22 10:14:47 np0005592157 podman[339620]: 2026-01-22 15:14:47.816787313 +0000 UTC m=+0.143201096 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 10:14:47 np0005592157 podman[339671]: 2026-01-22 15:14:47.937562235 +0000 UTC m=+0.039559668 container create a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:14:47 np0005592157 systemd[1]: Started libpod-conmon-a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53.scope.
Jan 22 10:14:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:14:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8ddc87dd6c5eaa9306e0ef8b39e1fd697d932ff1ebe38f47c1b8c2476f1ca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8ddc87dd6c5eaa9306e0ef8b39e1fd697d932ff1ebe38f47c1b8c2476f1ca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8ddc87dd6c5eaa9306e0ef8b39e1fd697d932ff1ebe38f47c1b8c2476f1ca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8ddc87dd6c5eaa9306e0ef8b39e1fd697d932ff1ebe38f47c1b8c2476f1ca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:14:48 np0005592157 podman[339671]: 2026-01-22 15:14:47.922197656 +0000 UTC m=+0.024195109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:14:48 np0005592157 podman[339671]: 2026-01-22 15:14:48.022623065 +0000 UTC m=+0.124620538 container init a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hellman, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:14:48 np0005592157 podman[339671]: 2026-01-22 15:14:48.036914248 +0000 UTC m=+0.138911701 container start a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:14:48 np0005592157 podman[339671]: 2026-01-22 15:14:48.040397774 +0000 UTC m=+0.142395247 container attach a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hellman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:14:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:14:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:14:48 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:48 np0005592157 confident_hellman[339687]: {
Jan 22 10:14:48 np0005592157 confident_hellman[339687]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:14:48 np0005592157 confident_hellman[339687]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:14:48 np0005592157 confident_hellman[339687]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:14:48 np0005592157 confident_hellman[339687]:        "osd_id": 0,
Jan 22 10:14:48 np0005592157 confident_hellman[339687]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:14:48 np0005592157 confident_hellman[339687]:        "type": "bluestore"
Jan 22 10:14:48 np0005592157 confident_hellman[339687]:    }
Jan 22 10:14:48 np0005592157 confident_hellman[339687]: }
Jan 22 10:14:48 np0005592157 systemd[1]: libpod-a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53.scope: Deactivated successfully.
Jan 22 10:14:48 np0005592157 podman[339671]: 2026-01-22 15:14:48.899734308 +0000 UTC m=+1.001731771 container died a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 10:14:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ee8ddc87dd6c5eaa9306e0ef8b39e1fd697d932ff1ebe38f47c1b8c2476f1ca6-merged.mount: Deactivated successfully.
Jan 22 10:14:48 np0005592157 podman[339671]: 2026-01-22 15:14:48.985624129 +0000 UTC m=+1.087621592 container remove a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 10:14:48 np0005592157 systemd[1]: libpod-conmon-a06d933d1529190156e2dfc7418d1003683bdd334bdfd1bb5d82d30471518b53.scope: Deactivated successfully.
Jan 22 10:14:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:14:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:49.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:14:49 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 24edd06b-79e4-4a48-9c94-9476190c207f does not exist
Jan 22 10:14:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ca0e28e5-c6d8-4f0a-bc12-95de7a129acb does not exist
Jan 22 10:14:49 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4f000ff4-e822-4c9c-8314-5c62b4f9f3fa does not exist
Jan 22 10:14:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:50.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:50 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:51.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:51 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:51 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:14:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:52.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:14:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:53.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:14:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:14:54 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:55.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 122 slow ops, oldest one blocked for 5887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:55 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:56.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:57 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:57 np0005592157 ceph-mon[74359]: Health check update: 122 slow ops, oldest one blocked for 5887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:14:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:14:58 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:14:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:59.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:14:59 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:59 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:00.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:01.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:01 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 122 slow ops, oldest one blocked for 5892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:02 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:03.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:03 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:03 np0005592157 ceph-mon[74359]: Health check update: 122 slow ops, oldest one blocked for 5892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:04.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:04 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:04 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:05.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:15:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:05 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:06.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:06 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:15:06.538 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:15:06 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:15:06.540 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:15:06 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:07.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:07 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:15:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:08.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:15:08 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:15:08.542 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:15:08 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:09.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:09 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:10 np0005592157 podman[339832]: 2026-01-22 15:15:10.359453852 +0000 UTC m=+0.080247042 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:15:10 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:11.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:11 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 122 slow ops, oldest one blocked for 5902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:11 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:12 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:12 np0005592157 ceph-mon[74359]: Health check update: 122 slow ops, oldest one blocked for 5902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:13.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:13 np0005592157 ceph-mon[74359]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:14.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:15.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:15 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:16 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:15:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:15:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:17.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 5907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:17 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:17 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 5907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:18 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:18 np0005592157 podman[339856]: 2026-01-22 15:15:18.380256758 +0000 UTC m=+0.113470123 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:15:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:19.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:19 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:21.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:21 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:22.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:23.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 5912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:25.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:26 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 5912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:26.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:27.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:27 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:27 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:27 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:15:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:28.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:15:28 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:29.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:29 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:30.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:31.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 5917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:32.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:32 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 5917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:33.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:33 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:34.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:35.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:35 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:36.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 5922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:37.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:38.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:39.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:39 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 5922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:39 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:40.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:40 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:41.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:41 np0005592157 podman[339943]: 2026-01-22 15:15:41.316719641 +0000 UTC m=+0.055292166 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:15:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:41 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 5932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:43.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:43 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:43 np0005592157 ceph-mon[74359]: Health check update: 3 slow ops, oldest one blocked for 5932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:44.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:44 np0005592157 ceph-mon[74359]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:44 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:46 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:15:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:15:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 123 slow ops, oldest one blocked for 5938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:15:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:15:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:15:47
Jan 22 10:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'backups', 'volumes']
Jan 22 10:15:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:15:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:15:47.651 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:15:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:15:47.652 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:15:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:15:47.652 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:15:47 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:47 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:47 np0005592157 ceph-mon[74359]: Health check update: 123 slow ops, oldest one blocked for 5938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:48.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:48 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:49.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:49 np0005592157 podman[340016]: 2026-01-22 15:15:49.345693957 +0000 UTC m=+0.081617856 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:15:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:49 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:50.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:50 np0005592157 podman[340215]: 2026-01-22 15:15:50.287295833 +0000 UTC m=+0.070573583 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:15:50 np0005592157 podman[340215]: 2026-01-22 15:15:50.39651696 +0000 UTC m=+0.179794650 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:15:50 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:51 np0005592157 podman[340370]: 2026-01-22 15:15:51.020233088 +0000 UTC m=+0.076779996 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:15:51 np0005592157 podman[340370]: 2026-01-22 15:15:51.027251581 +0000 UTC m=+0.083798439 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:15:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:51.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:51 np0005592157 podman[340437]: 2026-01-22 15:15:51.266075117 +0000 UTC m=+0.067792564 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc.)
Jan 22 10:15:51 np0005592157 podman[340437]: 2026-01-22 15:15:51.308424563 +0000 UTC m=+0.110142020 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64)
Jan 22 10:15:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:15:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:15:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:51 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 123 slow ops, oldest one blocked for 5943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:15:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:52.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 95f89a2a-9732-4f0b-aa85-570dacd874f7 does not exist
Jan 22 10:15:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 00d09668-8bb1-4a8f-a99b-b9139c6a5589 does not exist
Jan 22 10:15:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 146c2f3b-15cf-430d-8cfb-1a66d68c945b does not exist
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:15:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:15:53 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:53 np0005592157 ceph-mon[74359]: Health check update: 123 slow ops, oldest one blocked for 5943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:15:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:15:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:53.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:53 np0005592157 podman[340744]: 2026-01-22 15:15:53.591366754 +0000 UTC m=+0.073863825 container create ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:15:53 np0005592157 systemd[1]: Started libpod-conmon-ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96.scope.
Jan 22 10:15:53 np0005592157 podman[340744]: 2026-01-22 15:15:53.562012389 +0000 UTC m=+0.044509510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:15:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:15:53 np0005592157 podman[340744]: 2026-01-22 15:15:53.693091275 +0000 UTC m=+0.175588356 container init ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:15:53 np0005592157 podman[340744]: 2026-01-22 15:15:53.705131192 +0000 UTC m=+0.187628273 container start ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:15:53 np0005592157 podman[340744]: 2026-01-22 15:15:53.709667394 +0000 UTC m=+0.192164475 container attach ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:15:53 np0005592157 nifty_zhukovsky[340760]: 167 167
Jan 22 10:15:53 np0005592157 systemd[1]: libpod-ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96.scope: Deactivated successfully.
Jan 22 10:15:53 np0005592157 podman[340744]: 2026-01-22 15:15:53.713467868 +0000 UTC m=+0.195964949 container died ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:15:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3fb7612771229c961b1c592ae4fc4740af7502f0c70ea0ee1cc4b09dd65d29e8-merged.mount: Deactivated successfully.
Jan 22 10:15:53 np0005592157 podman[340744]: 2026-01-22 15:15:53.766412485 +0000 UTC m=+0.248909556 container remove ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 10:15:53 np0005592157 systemd[1]: libpod-conmon-ae44325ac84ee6fb4e695c11a9cfcf1d2be4a4bc885d83994fa9ba92b0fa9d96.scope: Deactivated successfully.
Jan 22 10:15:54 np0005592157 podman[340786]: 2026-01-22 15:15:54.013393513 +0000 UTC m=+0.074377298 container create 926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:15:54 np0005592157 systemd[1]: Started libpod-conmon-926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413.scope.
Jan 22 10:15:54 np0005592157 podman[340786]: 2026-01-22 15:15:53.984500709 +0000 UTC m=+0.045484554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:15:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:15:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e22b4888a8c519bef6a7c6b80c44fc920fb4b7cc9a8e54c179a270ca21cd03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e22b4888a8c519bef6a7c6b80c44fc920fb4b7cc9a8e54c179a270ca21cd03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e22b4888a8c519bef6a7c6b80c44fc920fb4b7cc9a8e54c179a270ca21cd03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e22b4888a8c519bef6a7c6b80c44fc920fb4b7cc9a8e54c179a270ca21cd03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:54 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e22b4888a8c519bef6a7c6b80c44fc920fb4b7cc9a8e54c179a270ca21cd03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:54 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:54 np0005592157 podman[340786]: 2026-01-22 15:15:54.109977047 +0000 UTC m=+0.170960892 container init 926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:15:54 np0005592157 podman[340786]: 2026-01-22 15:15:54.121758158 +0000 UTC m=+0.182741913 container start 926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shirley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 10:15:54 np0005592157 podman[340786]: 2026-01-22 15:15:54.140312376 +0000 UTC m=+0.201296131 container attach 926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shirley, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:15:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:54 np0005592157 naughty_shirley[340801]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:15:54 np0005592157 naughty_shirley[340801]: --> relative data size: 1.0
Jan 22 10:15:54 np0005592157 naughty_shirley[340801]: --> All data devices are unavailable
Jan 22 10:15:55 np0005592157 systemd[1]: libpod-926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413.scope: Deactivated successfully.
Jan 22 10:15:55 np0005592157 podman[340786]: 2026-01-22 15:15:55.015501903 +0000 UTC m=+1.076485668 container died 926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:15:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d9e22b4888a8c519bef6a7c6b80c44fc920fb4b7cc9a8e54c179a270ca21cd03-merged.mount: Deactivated successfully.
Jan 22 10:15:55 np0005592157 podman[340786]: 2026-01-22 15:15:55.094092583 +0000 UTC m=+1.155076338 container remove 926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:15:55 np0005592157 systemd[1]: libpod-conmon-926846d896cb5887b518c14c4cbc9bfb146b715aba83631cc7aabec8bfc70413.scope: Deactivated successfully.
Jan 22 10:15:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:15:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:55.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:15:55 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:55 np0005592157 podman[340968]: 2026-01-22 15:15:55.759528111 +0000 UTC m=+0.051034371 container create c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rosalind, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:15:55 np0005592157 systemd[1]: Started libpod-conmon-c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4.scope.
Jan 22 10:15:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:15:55 np0005592157 podman[340968]: 2026-01-22 15:15:55.738464281 +0000 UTC m=+0.029970551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:15:55 np0005592157 podman[340968]: 2026-01-22 15:15:55.966577783 +0000 UTC m=+0.258084093 container init c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:15:55 np0005592157 podman[340968]: 2026-01-22 15:15:55.978956989 +0000 UTC m=+0.270463239 container start c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rosalind, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 10:15:55 np0005592157 systemd[1]: libpod-c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4.scope: Deactivated successfully.
Jan 22 10:15:55 np0005592157 quirky_rosalind[340985]: 167 167
Jan 22 10:15:55 np0005592157 conmon[340985]: conmon c8cd0c99ee1fb1c5a6a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4.scope/container/memory.events
Jan 22 10:15:56 np0005592157 podman[340968]: 2026-01-22 15:15:56.126667186 +0000 UTC m=+0.418173476 container attach c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 10:15:56 np0005592157 podman[340968]: 2026-01-22 15:15:56.127441215 +0000 UTC m=+0.418947485 container died c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rosalind, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:15:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:56.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cc6426d5aac00f203a1e246da342ce767425d333ceabbb75d98327dc21d50f50-merged.mount: Deactivated successfully.
Jan 22 10:15:56 np0005592157 podman[340968]: 2026-01-22 15:15:56.297531944 +0000 UTC m=+0.589038204 container remove c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 10:15:56 np0005592157 systemd[1]: libpod-conmon-c8cd0c99ee1fb1c5a6a6c3548a00938464562f6af6d518a926df90f76cd0a6e4.scope: Deactivated successfully.
Jan 22 10:15:56 np0005592157 podman[341009]: 2026-01-22 15:15:56.504513142 +0000 UTC m=+0.053061720 container create 3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:15:56 np0005592157 systemd[1]: Started libpod-conmon-3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0.scope.
Jan 22 10:15:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:15:56 np0005592157 podman[341009]: 2026-01-22 15:15:56.478033349 +0000 UTC m=+0.026582007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:15:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed932225a27e90ddf898188c3471d87a74551832b8e4431c995602fde19d2345/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed932225a27e90ddf898188c3471d87a74551832b8e4431c995602fde19d2345/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed932225a27e90ddf898188c3471d87a74551832b8e4431c995602fde19d2345/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed932225a27e90ddf898188c3471d87a74551832b8e4431c995602fde19d2345/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:56 np0005592157 podman[341009]: 2026-01-22 15:15:56.591236033 +0000 UTC m=+0.139784641 container init 3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kowalevski, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:15:56 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:56 np0005592157 podman[341009]: 2026-01-22 15:15:56.597392896 +0000 UTC m=+0.145941484 container start 3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:15:56 np0005592157 podman[341009]: 2026-01-22 15:15:56.601043586 +0000 UTC m=+0.149592164 container attach 3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:15:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 123 slow ops, oldest one blocked for 5948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:57.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]: {
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:    "0": [
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:        {
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "devices": [
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "/dev/loop3"
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            ],
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "lv_name": "ceph_lv0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "lv_size": "7511998464",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "name": "ceph_lv0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "tags": {
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.cluster_name": "ceph",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.crush_device_class": "",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.encrypted": "0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.osd_id": "0",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.type": "block",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:                "ceph.vdo": "0"
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            },
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "type": "block",
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:            "vg_name": "ceph_vg0"
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:        }
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]:    ]
Jan 22 10:15:57 np0005592157 flamboyant_kowalevski[341026]: }
Jan 22 10:15:57 np0005592157 systemd[1]: libpod-3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0.scope: Deactivated successfully.
Jan 22 10:15:57 np0005592157 podman[341009]: 2026-01-22 15:15:57.375148517 +0000 UTC m=+0.923697095 container died 3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:15:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ed932225a27e90ddf898188c3471d87a74551832b8e4431c995602fde19d2345-merged.mount: Deactivated successfully.
Jan 22 10:15:57 np0005592157 podman[341009]: 2026-01-22 15:15:57.450167789 +0000 UTC m=+0.998716377 container remove 3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:15:57 np0005592157 systemd[1]: libpod-conmon-3e25b0cd60f1608661de4034d2f203e5cead740a50bfd3d0387c3f08653e86f0.scope: Deactivated successfully.
Jan 22 10:15:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:57 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592157 ceph-mon[74359]: Health check update: 123 slow ops, oldest one blocked for 5948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:58 np0005592157 podman[341190]: 2026-01-22 15:15:58.082825668 +0000 UTC m=+0.045367751 container create 5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 22 10:15:58 np0005592157 systemd[1]: Started libpod-conmon-5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16.scope.
Jan 22 10:15:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:15:58 np0005592157 podman[341190]: 2026-01-22 15:15:58.060959398 +0000 UTC m=+0.023501471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:15:58 np0005592157 podman[341190]: 2026-01-22 15:15:58.156063336 +0000 UTC m=+0.118605419 container init 5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:15:58 np0005592157 podman[341190]: 2026-01-22 15:15:58.161574183 +0000 UTC m=+0.124116236 container start 5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:15:58 np0005592157 wonderful_ride[341206]: 167 167
Jan 22 10:15:58 np0005592157 systemd[1]: libpod-5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16.scope: Deactivated successfully.
Jan 22 10:15:58 np0005592157 podman[341190]: 2026-01-22 15:15:58.165208492 +0000 UTC m=+0.127750575 container attach 5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ride, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 10:15:58 np0005592157 podman[341190]: 2026-01-22 15:15:58.165450048 +0000 UTC m=+0.127992111 container died 5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 10:15:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b9186af19ce9f60db33fa400090d043a265c42a2eda666b3be6060028c2bff35-merged.mount: Deactivated successfully.
Jan 22 10:15:58 np0005592157 podman[341190]: 2026-01-22 15:15:58.201402896 +0000 UTC m=+0.163944949 container remove 5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ride, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 10:15:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:15:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:58.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:15:58 np0005592157 systemd[1]: libpod-conmon-5acad7e999e3963689aedf0aa895c45766b2fc0ff7efee17550f831206c6ae16.scope: Deactivated successfully.
Jan 22 10:15:58 np0005592157 podman[341231]: 2026-01-22 15:15:58.371408963 +0000 UTC m=+0.047737710 container create d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:15:58 np0005592157 systemd[1]: Started libpod-conmon-d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d.scope.
Jan 22 10:15:58 np0005592157 podman[341231]: 2026-01-22 15:15:58.346352324 +0000 UTC m=+0.022681161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:15:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:15:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e3e1e7553ac2fe2dfb74315ed36ccda136c19d1b39a8d806c601fb70c2a652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e3e1e7553ac2fe2dfb74315ed36ccda136c19d1b39a8d806c601fb70c2a652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e3e1e7553ac2fe2dfb74315ed36ccda136c19d1b39a8d806c601fb70c2a652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e3e1e7553ac2fe2dfb74315ed36ccda136c19d1b39a8d806c601fb70c2a652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:15:58 np0005592157 podman[341231]: 2026-01-22 15:15:58.469178007 +0000 UTC m=+0.145506784 container init d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:15:58 np0005592157 podman[341231]: 2026-01-22 15:15:58.479182604 +0000 UTC m=+0.155511361 container start d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:15:58 np0005592157 podman[341231]: 2026-01-22 15:15:58.483260014 +0000 UTC m=+0.159588801 container attach d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:15:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:15:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:59.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]: {
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]:        "osd_id": 0,
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]:        "type": "bluestore"
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]:    }
Jan 22 10:15:59 np0005592157 hopeful_heyrovsky[341247]: }
Jan 22 10:15:59 np0005592157 systemd[1]: libpod-d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d.scope: Deactivated successfully.
Jan 22 10:15:59 np0005592157 podman[341268]: 2026-01-22 15:15:59.43982277 +0000 UTC m=+0.032031582 container died d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:15:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e1e3e1e7553ac2fe2dfb74315ed36ccda136c19d1b39a8d806c601fb70c2a652-merged.mount: Deactivated successfully.
Jan 22 10:15:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:15:59 np0005592157 podman[341268]: 2026-01-22 15:15:59.520373318 +0000 UTC m=+0.112582090 container remove d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:15:59 np0005592157 systemd[1]: libpod-conmon-d125323d3e49421b87a90d9470b60ba4334fbde3da200ba93e12ee4d15a5210d.scope: Deactivated successfully.
Jan 22 10:15:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:15:59 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:16:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:16:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e91502b1-ecd2-4f5a-a6f9-0cb59feda8fb does not exist
Jan 22 10:16:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9b3a5bbd-6077-47f0-9ad2-e2b16d004a7b does not exist
Jan 22 10:16:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 44d2b9c1-4f5e-4839-b98c-16353109f73f does not exist
Jan 22 10:16:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:01 np0005592157 ceph-mon[74359]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:16:01 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:02 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:03.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:03 np0005592157 ceph-mon[74359]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:04 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:16:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:05.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:05 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:05 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:06.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 5957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:06 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:07.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:07 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 5957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:07 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:08.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:09.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:09 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:10 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:10.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:11.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:11 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:12 np0005592157 podman[341390]: 2026-01-22 15:16:12.386157277 +0000 UTC m=+0.107933736 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 10:16:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 5963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:12 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:12 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:13.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:13 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 5963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:13 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:14.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:14 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:15.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:15 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:16.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:16:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:16:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:17.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:17 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:18.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:18 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:18 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:19.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:20 np0005592157 podman[341414]: 2026-01-22 15:16:20.384758136 +0000 UTC m=+0.111095704 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:16:20 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:21.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:21 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 5973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:21 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:21 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:23 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 5973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:23 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:23.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:24 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:24.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:16:24.411 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:16:24 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:16:24.413 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:16:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:25.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:25 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:26.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:26 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:16:26.415 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:16:26 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:26 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 5978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:27.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:27 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:27 np0005592157 ceph-mon[74359]: Health check update: 4 slow ops, oldest one blocked for 5978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:28.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:29.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:29 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:30.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:31 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:32 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:32.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:33 np0005592157 ceph-mon[74359]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:33.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:34 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:16:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:16:35 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:35 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:36.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 116 slow ops, oldest one blocked for 5987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:36 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:37.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:37 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:37 np0005592157 ceph-mon[74359]: Health check update: 116 slow ops, oldest one blocked for 5987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:37 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:38.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:38 np0005592157 ceph-mon[74359]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:16:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:39.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:40.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:40 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:40 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:41.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 116 slow ops, oldest one blocked for 5992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:42 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:42.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:43.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:43 np0005592157 podman[341500]: 2026-01-22 15:16:43.331818859 +0000 UTC m=+0.055073761 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 10:16:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:43 np0005592157 ceph-mon[74359]: Health check update: 116 slow ops, oldest one blocked for 5992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:43 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:44.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:45.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.716104) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005716159, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1683, "num_deletes": 446, "total_data_size": 2197630, "memory_usage": 2239312, "flush_reason": "Manual Compaction"}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005734097, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 2139298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91679, "largest_seqno": 93360, "table_properties": {"data_size": 2132203, "index_size": 3524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 22351, "raw_average_key_size": 22, "raw_value_size": 2115184, "raw_average_value_size": 2149, "num_data_blocks": 153, "num_entries": 984, "num_filter_entries": 984, "num_deletions": 446, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094887, "oldest_key_time": 1769094887, "file_creation_time": 1769095005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 18067 microseconds, and 7255 cpu microseconds.
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.734168) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 2139298 bytes OK
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.734189) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.736140) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.736157) EVENT_LOG_v1 {"time_micros": 1769095005736151, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.736176) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 2189411, prev total WAL file size 2189411, number of live WAL files 2.
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.737049) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(2089KB)], [209(9894KB)]
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005737091, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 12271141, "oldest_snapshot_seqno": -1}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 14081 keys, 10400980 bytes, temperature: kUnknown
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005808037, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 10400980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10326525, "index_size": 38118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35269, "raw_key_size": 388103, "raw_average_key_size": 27, "raw_value_size": 10089293, "raw_average_value_size": 716, "num_data_blocks": 1367, "num_entries": 14081, "num_filter_entries": 14081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.808292) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 10400980 bytes
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.810003) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.8 rd, 146.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 14986, records dropped: 905 output_compression: NoCompression
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.810021) EVENT_LOG_v1 {"time_micros": 1769095005810013, "job": 132, "event": "compaction_finished", "compaction_time_micros": 71032, "compaction_time_cpu_micros": 25760, "output_level": 6, "num_output_files": 1, "total_output_size": 10400980, "num_input_records": 14986, "num_output_records": 14081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005810423, "job": 132, "event": "table_file_deletion", "file_number": 211}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005812221, "job": 132, "event": "table_file_deletion", "file_number": 209}
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.736989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.812299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.812306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.812308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.812310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:16:45.812311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:46.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:16:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:16:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 116 slow ops, oldest one blocked for 5997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:47.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:16:47
Jan 22 10:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups']
Jan 22 10:16:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:16:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:16:47.652 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:16:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:16:47.653 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:16:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:16:47.653 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:16:47 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:48.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:48 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:48 np0005592157 ceph-mon[74359]: Health check update: 116 slow ops, oldest one blocked for 5997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:48 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:48 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:49 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:51 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:51.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:51 np0005592157 podman[341574]: 2026-01-22 15:16:51.349475088 +0000 UTC m=+0.088861955 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 10:16:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:52 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:52.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:53 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:53.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:54.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:54 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:55.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 116 slow ops, oldest one blocked for 6007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:16:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:16:56 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:56 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:57.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:16:57 np0005592157 ceph-mon[74359]: Health check update: 116 slow ops, oldest one blocked for 6007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:57 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:16:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:16:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:58.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:16:58 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:16:58 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:16:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:59.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 10:16:59 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:17:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:00.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:00 np0005592157 ceph-mon[74359]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:17:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:01.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 20 op/s
Jan 22 10:17:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:17:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:17:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 116 slow ops, oldest one blocked for 6012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:02.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:02 np0005592157 ceph-mon[74359]: 68 slow requests (by type [ 'delayed' : 68 ] most affected pool [ 'vms' : 47 ])
Jan 22 10:17:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:17:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:03.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 08254324-5414-48f3-98d1-c7d046629d4a does not exist
Jan 22 10:17:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 34624ae1-8c74-490e-aa0a-00b3b8969ba5 does not exist
Jan 22 10:17:03 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 55e5f089-92c3-4850-af47-44933f5d1c99 does not exist
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:17:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 20 op/s
Jan 22 10:17:03 np0005592157 podman[341878]: 2026-01-22 15:17:03.794825127 +0000 UTC m=+0.050937689 container create 04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_fermi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:17:03 np0005592157 systemd[1]: Started libpod-conmon-04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc.scope.
Jan 22 10:17:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:17:03 np0005592157 podman[341878]: 2026-01-22 15:17:03.772516686 +0000 UTC m=+0.028629278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: Health check update: 116 slow ops, oldest one blocked for 6012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 32 ])
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:17:03 np0005592157 podman[341878]: 2026-01-22 15:17:03.971463788 +0000 UTC m=+0.227576360 container init 04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:17:03 np0005592157 podman[341878]: 2026-01-22 15:17:03.984252764 +0000 UTC m=+0.240365316 container start 04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_fermi, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:17:03 np0005592157 upbeat_fermi[341894]: 167 167
Jan 22 10:17:03 np0005592157 systemd[1]: libpod-04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc.scope: Deactivated successfully.
Jan 22 10:17:04 np0005592157 podman[341878]: 2026-01-22 15:17:04.245152625 +0000 UTC m=+0.501265277 container attach 04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_fermi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:17:04 np0005592157 podman[341878]: 2026-01-22 15:17:04.246329634 +0000 UTC m=+0.502442226 container died 04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_fermi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:17:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:04.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ee93ed6962f46f76c94191ce0ecb5fdb5fa3a0259c71563ef0bb98cfad915b37-merged.mount: Deactivated successfully.
Jan 22 10:17:04 np0005592157 podman[341878]: 2026-01-22 15:17:04.743496107 +0000 UTC m=+0.999608699 container remove 04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:17:04 np0005592157 systemd[1]: libpod-conmon-04a2e13ca0d1a8e39f59a623aaa52db2fbc1570f38f51e1b2f894f4023ac17fc.scope: Deactivated successfully.
Jan 22 10:17:04 np0005592157 podman[341919]: 2026-01-22 15:17:04.947205806 +0000 UTC m=+0.058528476 container create f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:17:04 np0005592157 systemd[1]: Started libpod-conmon-f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e.scope.
Jan 22 10:17:05 np0005592157 podman[341919]: 2026-01-22 15:17:04.91294997 +0000 UTC m=+0.024272680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:17:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:17:05 np0005592157 ceph-mon[74359]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbef3c66267d5760ee91af491f89f7cbd178c4d50350acaaeee9b6164da65d34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbef3c66267d5760ee91af491f89f7cbd178c4d50350acaaeee9b6164da65d34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbef3c66267d5760ee91af491f89f7cbd178c4d50350acaaeee9b6164da65d34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbef3c66267d5760ee91af491f89f7cbd178c4d50350acaaeee9b6164da65d34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbef3c66267d5760ee91af491f89f7cbd178c4d50350acaaeee9b6164da65d34/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:05 np0005592157 podman[341919]: 2026-01-22 15:17:05.044227381 +0000 UTC m=+0.155550101 container init f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:17:05 np0005592157 podman[341919]: 2026-01-22 15:17:05.066266405 +0000 UTC m=+0.177589055 container start f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:17:05 np0005592157 podman[341919]: 2026-01-22 15:17:05.075921244 +0000 UTC m=+0.187244044 container attach f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:17:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:05.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 597 B/s wr, 94 op/s
Jan 22 10:17:05 np0005592157 eloquent_noether[341933]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:17:05 np0005592157 eloquent_noether[341933]: --> relative data size: 1.0
Jan 22 10:17:05 np0005592157 eloquent_noether[341933]: --> All data devices are unavailable
Jan 22 10:17:05 np0005592157 systemd[1]: libpod-f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e.scope: Deactivated successfully.
Jan 22 10:17:05 np0005592157 podman[341919]: 2026-01-22 15:17:05.895414505 +0000 UTC m=+1.006737175 container died f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:17:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-dbef3c66267d5760ee91af491f89f7cbd178c4d50350acaaeee9b6164da65d34-merged.mount: Deactivated successfully.
Jan 22 10:17:06 np0005592157 podman[341919]: 2026-01-22 15:17:06.802467238 +0000 UTC m=+1.913789868 container remove f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:17:06 np0005592157 systemd[1]: libpod-conmon-f9adac5448aaf700d98b499db8a437173750fd65799cd44be7c8df53343f697e.scope: Deactivated successfully.
Jan 22 10:17:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 131 slow ops, oldest one blocked for 6018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:07.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:07 np0005592157 ceph-mon[74359]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 597 B/s wr, 94 op/s
Jan 22 10:17:07 np0005592157 podman[342153]: 2026-01-22 15:17:07.520191287 +0000 UTC m=+0.045771501 container create c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:17:07 np0005592157 systemd[1]: Started libpod-conmon-c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd.scope.
Jan 22 10:17:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:17:07 np0005592157 podman[342153]: 2026-01-22 15:17:07.503780922 +0000 UTC m=+0.029361146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:17:07 np0005592157 podman[342153]: 2026-01-22 15:17:07.603101224 +0000 UTC m=+0.128681488 container init c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 10:17:07 np0005592157 podman[342153]: 2026-01-22 15:17:07.615353597 +0000 UTC m=+0.140933811 container start c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:17:07 np0005592157 podman[342153]: 2026-01-22 15:17:07.619826987 +0000 UTC m=+0.145407211 container attach c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:17:07 np0005592157 cool_maxwell[342169]: 167 167
Jan 22 10:17:07 np0005592157 systemd[1]: libpod-c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd.scope: Deactivated successfully.
Jan 22 10:17:07 np0005592157 conmon[342169]: conmon c9382770df1ce4ee83e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd.scope/container/memory.events
Jan 22 10:17:07 np0005592157 podman[342153]: 2026-01-22 15:17:07.623492968 +0000 UTC m=+0.149073232 container died c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 10:17:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-20020c12b5afb6b1b5a5b1afb98bcaf336a4fa6cf529d88d7e7779895ca487f0-merged.mount: Deactivated successfully.
Jan 22 10:17:07 np0005592157 podman[342153]: 2026-01-22 15:17:07.663902225 +0000 UTC m=+0.189482479 container remove c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:17:07 np0005592157 systemd[1]: libpod-conmon-c9382770df1ce4ee83e8a3fd174a688172bf49d39d5a19807a969faf39aa46fd.scope: Deactivated successfully.
Jan 22 10:17:07 np0005592157 podman[342195]: 2026-01-22 15:17:07.818733698 +0000 UTC m=+0.022377644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:17:07 np0005592157 podman[342195]: 2026-01-22 15:17:07.941365355 +0000 UTC m=+0.145009311 container create b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 22 10:17:08 np0005592157 systemd[1]: Started libpod-conmon-b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4.scope.
Jan 22 10:17:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:17:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27d862dad765f0df1e36b5fd2c30d754659d1d588184168477714a54618a5c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27d862dad765f0df1e36b5fd2c30d754659d1d588184168477714a54618a5c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27d862dad765f0df1e36b5fd2c30d754659d1d588184168477714a54618a5c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27d862dad765f0df1e36b5fd2c30d754659d1d588184168477714a54618a5c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:08 np0005592157 podman[342195]: 2026-01-22 15:17:08.155254235 +0000 UTC m=+0.358898201 container init b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:17:08 np0005592157 podman[342195]: 2026-01-22 15:17:08.166596505 +0000 UTC m=+0.370240461 container start b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:17:08 np0005592157 podman[342195]: 2026-01-22 15:17:08.170554033 +0000 UTC m=+0.374197999 container attach b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:17:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592157 ceph-mon[74359]: Health check update: 131 slow ops, oldest one blocked for 6018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:08 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]: {
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:    "0": [
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:        {
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "devices": [
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "/dev/loop3"
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            ],
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "lv_name": "ceph_lv0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "lv_size": "7511998464",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "name": "ceph_lv0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "tags": {
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.cluster_name": "ceph",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.crush_device_class": "",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.encrypted": "0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.osd_id": "0",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.type": "block",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:                "ceph.vdo": "0"
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            },
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "type": "block",
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:            "vg_name": "ceph_vg0"
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:        }
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]:    ]
Jan 22 10:17:08 np0005592157 wizardly_dewdney[342211]: }
Jan 22 10:17:08 np0005592157 systemd[1]: libpod-b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4.scope: Deactivated successfully.
Jan 22 10:17:08 np0005592157 podman[342195]: 2026-01-22 15:17:08.943649139 +0000 UTC m=+1.147293105 container died b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 10:17:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b27d862dad765f0df1e36b5fd2c30d754659d1d588184168477714a54618a5c1-merged.mount: Deactivated successfully.
Jan 22 10:17:09 np0005592157 podman[342195]: 2026-01-22 15:17:09.018158738 +0000 UTC m=+1.221802675 container remove b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:17:09 np0005592157 systemd[1]: libpod-conmon-b0d9f5f99742a016ff2d744fcf0bc31d665c8441a52752072eb7d50ba43934d4.scope: Deactivated successfully.
Jan 22 10:17:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 852 B/s wr, 116 op/s
Jan 22 10:17:09 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:09 np0005592157 podman[342370]: 2026-01-22 15:17:09.806223964 +0000 UTC m=+0.046534190 container create fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_greider, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 10:17:09 np0005592157 systemd[1]: Started libpod-conmon-fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85.scope.
Jan 22 10:17:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:17:09 np0005592157 podman[342370]: 2026-01-22 15:17:09.791334197 +0000 UTC m=+0.031644423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:17:09 np0005592157 podman[342370]: 2026-01-22 15:17:09.897684782 +0000 UTC m=+0.137995068 container init fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:17:09 np0005592157 podman[342370]: 2026-01-22 15:17:09.904116821 +0000 UTC m=+0.144427047 container start fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_greider, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:17:09 np0005592157 podman[342370]: 2026-01-22 15:17:09.908620952 +0000 UTC m=+0.148931258 container attach fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_greider, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:17:09 np0005592157 beautiful_greider[342387]: 167 167
Jan 22 10:17:09 np0005592157 systemd[1]: libpod-fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85.scope: Deactivated successfully.
Jan 22 10:17:09 np0005592157 podman[342370]: 2026-01-22 15:17:09.912040107 +0000 UTC m=+0.152350383 container died fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_greider, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 10:17:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f87fc97c7703fe6f4cc4d32b3676ecfafa5b745a4516925f5fe5e1a79e79689f-merged.mount: Deactivated successfully.
Jan 22 10:17:09 np0005592157 podman[342370]: 2026-01-22 15:17:09.959435657 +0000 UTC m=+0.199745873 container remove fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:17:09 np0005592157 systemd[1]: libpod-conmon-fd803d440b21f3416432f6c8d97141d041346ab9d8c9d3a9e7fee785ce844d85.scope: Deactivated successfully.
Jan 22 10:17:10 np0005592157 podman[342413]: 2026-01-22 15:17:10.168415826 +0000 UTC m=+0.069925217 container create 73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 10:17:10 np0005592157 systemd[1]: Started libpod-conmon-73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708.scope.
Jan 22 10:17:10 np0005592157 podman[342413]: 2026-01-22 15:17:10.141263576 +0000 UTC m=+0.042773007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:17:10 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:17:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3402a0bc390ca21e6d0b0f7e977683d88de0b411188f0d4efaa52e064e4b81a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3402a0bc390ca21e6d0b0f7e977683d88de0b411188f0d4efaa52e064e4b81a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3402a0bc390ca21e6d0b0f7e977683d88de0b411188f0d4efaa52e064e4b81a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:10 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3402a0bc390ca21e6d0b0f7e977683d88de0b411188f0d4efaa52e064e4b81a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:17:10 np0005592157 podman[342413]: 2026-01-22 15:17:10.272513056 +0000 UTC m=+0.174022437 container init 73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:17:10 np0005592157 podman[342413]: 2026-01-22 15:17:10.281697153 +0000 UTC m=+0.183206544 container start 73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:17:10 np0005592157 podman[342413]: 2026-01-22 15:17:10.28683144 +0000 UTC m=+0.188340891 container attach 73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:17:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:10.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]: {
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]:        "osd_id": 0,
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]:        "type": "bluestore"
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]:    }
Jan 22 10:17:11 np0005592157 affectionate_brahmagupta[342429]: }
Jan 22 10:17:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:11.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:11 np0005592157 systemd[1]: libpod-73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708.scope: Deactivated successfully.
Jan 22 10:17:11 np0005592157 podman[342413]: 2026-01-22 15:17:11.220998242 +0000 UTC m=+1.122507603 container died 73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:17:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b3402a0bc390ca21e6d0b0f7e977683d88de0b411188f0d4efaa52e064e4b81a-merged.mount: Deactivated successfully.
Jan 22 10:17:11 np0005592157 podman[342413]: 2026-01-22 15:17:11.280612394 +0000 UTC m=+1.182121745 container remove 73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:17:11 np0005592157 systemd[1]: libpod-conmon-73654ac276db2bb88266a65583b6b334a9d547ac3a5d1ed86b519da1edf44708.scope: Deactivated successfully.
Jan 22 10:17:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:17:11 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 852 B/s wr, 151 op/s
Jan 22 10:17:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:17:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8cab680d-bbfb-493f-b079-c92fec394d33 does not exist
Jan 22 10:17:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d1b03f10-7f0b-4e9b-940f-60ff3221c217 does not exist
Jan 22 10:17:11 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f5d0f95a-e768-4251-a527-2edab9a16b0d does not exist
Jan 22 10:17:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:12.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 6023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:12 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:13.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 682 B/s wr, 131 op/s
Jan 22 10:17:14 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 6023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:14 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:14 np0005592157 podman[342513]: 2026-01-22 15:17:14.313505449 +0000 UTC m=+0.053671366 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 10:17:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:14.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:15.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:15 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 682 B/s wr, 175 op/s
Jan 22 10:17:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:16.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:17:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:17:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:17.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 255 B/s wr, 101 op/s
Jan 22 10:17:18 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:19.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:19 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 255 B/s wr, 101 op/s
Jan 22 10:17:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:21.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:21 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 22 10:17:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 6028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:22.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:22 np0005592157 podman[342537]: 2026-01-22 15:17:22.387029085 +0000 UTC m=+0.113867862 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:17:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:23.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 22 10:17:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:23 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:23 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 6028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:24.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:24 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:25 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:17:25.022 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:17:25 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:17:25.023 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:17:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:25.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 22 10:17:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:26.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:26 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 6038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:27.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:28 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:28 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 6038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:28.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:30.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:32.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 6043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592157 ceph-mon[74359]: Health check update: 12 slow ops, oldest one blocked for 6043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:34 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:17:34.025 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:17:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:35 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:36.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:37 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:37 np0005592157 ceph-mon[74359]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:37 np0005592157 ceph-mon[74359]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:17:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:38.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:38 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:39.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:39 np0005592157 ceph-mon[74359]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:39 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:40.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:41.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:41 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:42.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:43 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:43 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:44 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:45 np0005592157 podman[342623]: 2026-01-22 15:17:45.312901253 +0000 UTC m=+0.053836798 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 10:17:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:45 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:45 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:46.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:17:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:17:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:47 np0005592157 ceph-mgr[74655]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:17:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:17:47
Jan 22 10:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 22 10:17:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:17:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:17:47.653 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:17:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:17:47.653 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:17:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:17:47.653 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:17:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:48.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:49 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:49 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:49.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:50.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:50 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:51.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:52.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:53 np0005592157 podman[342698]: 2026-01-22 15:17:53.380917588 +0000 UTC m=+0.123055969 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 22 10:17:53 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:54 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:55.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:55 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:56.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:57 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:17:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:17:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:58 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:17:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:58.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:17:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:17:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:17:59 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:59 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:01 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:01.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:02 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:02.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:03.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:03 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:03 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:04.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:04 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:04 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:18:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:05.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:06.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:06 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:06 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:07.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:07 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:07 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:08.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:08 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:09.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:09 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:10.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:10 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:11.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:12 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:12.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:18:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:18:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:13.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:18:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev df4fb9ee-0a3c-4d61-9459-333c00178cf5 does not exist
Jan 22 10:18:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6b18dd06-b684-4ee6-a0f5-c26ccead377d does not exist
Jan 22 10:18:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9dd736fd-2c5d-4703-b7b8-3607298a8030 does not exist
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:18:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:18:14 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:18:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:18:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:14.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:14 np0005592157 podman[343176]: 2026-01-22 15:18:14.674448146 +0000 UTC m=+0.059388853 container create da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:18:14 np0005592157 systemd[1]: Started libpod-conmon-da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc.scope.
Jan 22 10:18:14 np0005592157 podman[343176]: 2026-01-22 15:18:14.646303603 +0000 UTC m=+0.031244390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:18:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:18:14 np0005592157 podman[343176]: 2026-01-22 15:18:14.768565511 +0000 UTC m=+0.153506228 container init da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:18:14 np0005592157 podman[343176]: 2026-01-22 15:18:14.778766439 +0000 UTC m=+0.163707146 container start da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:18:14 np0005592157 podman[343176]: 2026-01-22 15:18:14.782352426 +0000 UTC m=+0.167293143 container attach da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 10:18:14 np0005592157 condescending_greider[343192]: 167 167
Jan 22 10:18:14 np0005592157 systemd[1]: libpod-da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc.scope: Deactivated successfully.
Jan 22 10:18:14 np0005592157 podman[343176]: 2026-01-22 15:18:14.784007956 +0000 UTC m=+0.168948643 container died da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:18:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cb556fb8dd81f321a47b8e232f6873ca51f4a326ebff65ef42a7397e32f3c02d-merged.mount: Deactivated successfully.
Jan 22 10:18:14 np0005592157 podman[343176]: 2026-01-22 15:18:14.823976887 +0000 UTC m=+0.208917564 container remove da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:18:14 np0005592157 systemd[1]: libpod-conmon-da7e95a54eb9db943848a417bdf6dec95d15c05cade3bf7a8abf670ef803e2dc.scope: Deactivated successfully.
Jan 22 10:18:15 np0005592157 podman[343215]: 2026-01-22 15:18:15.061493443 +0000 UTC m=+0.073759411 container create 928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mendel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:18:15 np0005592157 systemd[1]: Started libpod-conmon-928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11.scope.
Jan 22 10:18:15 np0005592157 podman[343215]: 2026-01-22 15:18:15.03377069 +0000 UTC m=+0.046036738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:18:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:18:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75fa99ce67878daf77872ec59c96ced83550d689391435418bec6c39150df4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75fa99ce67878daf77872ec59c96ced83550d689391435418bec6c39150df4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75fa99ce67878daf77872ec59c96ced83550d689391435418bec6c39150df4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75fa99ce67878daf77872ec59c96ced83550d689391435418bec6c39150df4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75fa99ce67878daf77872ec59c96ced83550d689391435418bec6c39150df4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:15 np0005592157 podman[343215]: 2026-01-22 15:18:15.17339192 +0000 UTC m=+0.185657898 container init 928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:18:15 np0005592157 podman[343215]: 2026-01-22 15:18:15.190575117 +0000 UTC m=+0.202841115 container start 928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 10:18:15 np0005592157 podman[343215]: 2026-01-22 15:18:15.194826481 +0000 UTC m=+0.207092449 container attach 928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:18:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:15.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:15 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:16 np0005592157 sleepy_mendel[343231]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:18:16 np0005592157 sleepy_mendel[343231]: --> relative data size: 1.0
Jan 22 10:18:16 np0005592157 sleepy_mendel[343231]: --> All data devices are unavailable
Jan 22 10:18:16 np0005592157 systemd[1]: libpod-928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11.scope: Deactivated successfully.
Jan 22 10:18:16 np0005592157 podman[343215]: 2026-01-22 15:18:16.045073613 +0000 UTC m=+1.057339691 container died 928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mendel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:18:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ab75fa99ce67878daf77872ec59c96ced83550d689391435418bec6c39150df4-merged.mount: Deactivated successfully.
Jan 22 10:18:16 np0005592157 podman[343215]: 2026-01-22 15:18:16.12937406 +0000 UTC m=+1.141640028 container remove 928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:18:16 np0005592157 systemd[1]: libpod-conmon-928ca08ae157c48b6cabd550f7e41884f139f35069a4f68a936d851a17864a11.scope: Deactivated successfully.
Jan 22 10:18:16 np0005592157 podman[343248]: 2026-01-22 15:18:16.150437111 +0000 UTC m=+0.068341910 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:18:16 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:16.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:18:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:18:16 np0005592157 podman[343421]: 2026-01-22 15:18:16.867181404 +0000 UTC m=+0.040769791 container create c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:18:16 np0005592157 systemd[1]: Started libpod-conmon-c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf.scope.
Jan 22 10:18:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:18:16 np0005592157 podman[343421]: 2026-01-22 15:18:16.851342609 +0000 UTC m=+0.024931016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:18:16 np0005592157 podman[343421]: 2026-01-22 15:18:16.95641839 +0000 UTC m=+0.130006807 container init c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:18:16 np0005592157 podman[343421]: 2026-01-22 15:18:16.964359813 +0000 UTC m=+0.137948200 container start c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_spence, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:18:16 np0005592157 podman[343421]: 2026-01-22 15:18:16.9679553 +0000 UTC m=+0.141543687 container attach c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:18:16 np0005592157 angry_spence[343437]: 167 167
Jan 22 10:18:16 np0005592157 systemd[1]: libpod-c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf.scope: Deactivated successfully.
Jan 22 10:18:16 np0005592157 podman[343421]: 2026-01-22 15:18:16.973164337 +0000 UTC m=+0.146752724 container died c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_spence, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:18:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-16850d7de2c7260ac6d0edf1d8cec8f211f3e545774ce7f34619e2b42e7733b9-merged.mount: Deactivated successfully.
Jan 22 10:18:17 np0005592157 podman[343421]: 2026-01-22 15:18:17.019684466 +0000 UTC m=+0.193272883 container remove c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_spence, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:18:17 np0005592157 systemd[1]: libpod-conmon-c3681e6e8014b4f5a0e9f2619fa3e35da291b7945eebfad5e2f2564c737803cf.scope: Deactivated successfully.
Jan 22 10:18:17 np0005592157 podman[343459]: 2026-01-22 15:18:17.171041261 +0000 UTC m=+0.039878429 container create 9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:18:17 np0005592157 systemd[1]: Started libpod-conmon-9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e.scope.
Jan 22 10:18:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:18:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad27bf8a8b28060c0671ea2cbe9714f1707296f54c0bf83f9b8da427d7f14e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad27bf8a8b28060c0671ea2cbe9714f1707296f54c0bf83f9b8da427d7f14e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad27bf8a8b28060c0671ea2cbe9714f1707296f54c0bf83f9b8da427d7f14e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad27bf8a8b28060c0671ea2cbe9714f1707296f54c0bf83f9b8da427d7f14e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:17 np0005592157 podman[343459]: 2026-01-22 15:18:17.153550657 +0000 UTC m=+0.022387855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:18:17 np0005592157 podman[343459]: 2026-01-22 15:18:17.257645894 +0000 UTC m=+0.126483092 container init 9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:18:17 np0005592157 podman[343459]: 2026-01-22 15:18:17.263600368 +0000 UTC m=+0.132437546 container start 9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brown, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 22 10:18:17 np0005592157 podman[343459]: 2026-01-22 15:18:17.267130684 +0000 UTC m=+0.135967862 container attach 9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brown, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:18:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:17.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:17 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:17 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]: {
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:    "0": [
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:        {
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "devices": [
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "/dev/loop3"
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            ],
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "lv_name": "ceph_lv0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "lv_size": "7511998464",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "name": "ceph_lv0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "tags": {
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.cluster_name": "ceph",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.crush_device_class": "",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.encrypted": "0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.osd_id": "0",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.type": "block",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:                "ceph.vdo": "0"
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            },
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "type": "block",
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:            "vg_name": "ceph_vg0"
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:        }
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]:    ]
Jan 22 10:18:18 np0005592157 mystifying_brown[343475]: }
Jan 22 10:18:18 np0005592157 systemd[1]: libpod-9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e.scope: Deactivated successfully.
Jan 22 10:18:18 np0005592157 podman[343459]: 2026-01-22 15:18:18.031543634 +0000 UTC m=+0.900380812 container died 9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 22 10:18:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1ad27bf8a8b28060c0671ea2cbe9714f1707296f54c0bf83f9b8da427d7f14e9-merged.mount: Deactivated successfully.
Jan 22 10:18:18 np0005592157 podman[343459]: 2026-01-22 15:18:18.084832358 +0000 UTC m=+0.953669526 container remove 9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brown, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:18:18 np0005592157 systemd[1]: libpod-conmon-9e7978fe0aee8078d09be15ab03cb93ce8089835d2fb9e3bd68174966fd6ee9e.scope: Deactivated successfully.
Jan 22 10:18:18 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:18.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:18:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433821131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:18:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:18:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433821131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:18:18 np0005592157 podman[343639]: 2026-01-22 15:18:18.732109823 +0000 UTC m=+0.043808415 container create d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:18:18 np0005592157 systemd[1]: Started libpod-conmon-d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5.scope.
Jan 22 10:18:18 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:18:18 np0005592157 podman[343639]: 2026-01-22 15:18:18.710521369 +0000 UTC m=+0.022220041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:18:18 np0005592157 podman[343639]: 2026-01-22 15:18:18.807815771 +0000 UTC m=+0.119514383 container init d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:18:18 np0005592157 podman[343639]: 2026-01-22 15:18:18.814028632 +0000 UTC m=+0.125727224 container start d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 10:18:18 np0005592157 upbeat_sutherland[343655]: 167 167
Jan 22 10:18:18 np0005592157 podman[343639]: 2026-01-22 15:18:18.818189683 +0000 UTC m=+0.129888275 container attach d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:18:18 np0005592157 systemd[1]: libpod-d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5.scope: Deactivated successfully.
Jan 22 10:18:18 np0005592157 podman[343639]: 2026-01-22 15:18:18.819424633 +0000 UTC m=+0.131123235 container died d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:18:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6797086d6889759c0a48cbd8e0e772e19d088e5ad1475c3f704203b8fa06ef3f-merged.mount: Deactivated successfully.
Jan 22 10:18:18 np0005592157 podman[343639]: 2026-01-22 15:18:18.856467222 +0000 UTC m=+0.168165814 container remove d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:18:18 np0005592157 systemd[1]: libpod-conmon-d3451c7d3b603f81c7e90b15041d39a826106251e0247ff6c9fa6004023fbca5.scope: Deactivated successfully.
Jan 22 10:18:19 np0005592157 podman[343678]: 2026-01-22 15:18:19.022140135 +0000 UTC m=+0.043891817 container create 53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:18:19 np0005592157 systemd[1]: Started libpod-conmon-53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba.scope.
Jan 22 10:18:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:18:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d58d8fed1a6d0b6fee3c273c5b8dd34d012449ebcd483784b6d9a053116dca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d58d8fed1a6d0b6fee3c273c5b8dd34d012449ebcd483784b6d9a053116dca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d58d8fed1a6d0b6fee3c273c5b8dd34d012449ebcd483784b6d9a053116dca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d58d8fed1a6d0b6fee3c273c5b8dd34d012449ebcd483784b6d9a053116dca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:18:19 np0005592157 podman[343678]: 2026-01-22 15:18:19.101339228 +0000 UTC m=+0.123090930 container init 53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chatelet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:18:19 np0005592157 podman[343678]: 2026-01-22 15:18:19.006228538 +0000 UTC m=+0.027980240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:18:19 np0005592157 podman[343678]: 2026-01-22 15:18:19.109032755 +0000 UTC m=+0.130784437 container start 53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:18:19 np0005592157 podman[343678]: 2026-01-22 15:18:19.112471268 +0000 UTC m=+0.134222980 container attach 53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chatelet, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:18:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:19.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:19 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]: {
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]:        "osd_id": 0,
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]:        "type": "bluestore"
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]:    }
Jan 22 10:18:19 np0005592157 heuristic_chatelet[343694]: }
Jan 22 10:18:19 np0005592157 systemd[1]: libpod-53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba.scope: Deactivated successfully.
Jan 22 10:18:19 np0005592157 podman[343717]: 2026-01-22 15:18:19.979736964 +0000 UTC m=+0.026674329 container died 53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chatelet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 10:18:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-44d58d8fed1a6d0b6fee3c273c5b8dd34d012449ebcd483784b6d9a053116dca-merged.mount: Deactivated successfully.
Jan 22 10:18:20 np0005592157 podman[343717]: 2026-01-22 15:18:20.028747734 +0000 UTC m=+0.075685079 container remove 53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 10:18:20 np0005592157 systemd[1]: libpod-conmon-53f76aa82d0ed013a11b2bfb04890ec695f41b000125747f7f943ad4c616c1ba.scope: Deactivated successfully.
Jan 22 10:18:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:18:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:18:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 64bb5643-0d92-4332-99c5-a5b7b6a18068 does not exist
Jan 22 10:18:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b8ef6b91-2564-45b6-8eef-ee79a7d662cc does not exist
Jan 22 10:18:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5025a3b3-7d87-4f0f-88db-17bb05d93b31 does not exist
Jan 22 10:18:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:20.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:20 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:20 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:21.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:18:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:23.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:18:23 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:23 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:24 np0005592157 podman[343784]: 2026-01-22 15:18:24.34465375 +0000 UTC m=+0.084307648 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 10:18:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:24.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:25 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:25 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:26 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:26.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:27.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:27 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:28 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:18:28.013 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:18:28 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:18:28.015 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:18:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:28.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:28 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:28 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:18:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:29.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:18:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:29 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:30.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:30 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:31.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:31 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:32 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:32.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:33 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:33 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:33.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:34 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:34.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:35 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:18:35.018 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:18:35 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:36 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:36.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:37 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:38 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:38 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:18:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:38.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:18:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:39.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:39 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:40.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:40 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:40 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:41 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:42.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:42 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:18:42 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:18:43 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:43 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:44 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:44 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:44.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:45.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:45 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:46 np0005592157 podman[343923]: 2026-01-22 15:18:46.331949991 +0000 UTC m=+0.087090296 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 10:18:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:46.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:18:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:18:47 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:47.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:18:47
Jan 22 10:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'images', 'vms', 'default.rgw.meta', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 22 10:18:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:18:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:18:47.653 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:18:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:18:47.654 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:18:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:18:47.654 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.071238) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128071290, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1739, "num_deletes": 459, "total_data_size": 2345978, "memory_usage": 2387536, "flush_reason": "Manual Compaction"}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128112649, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 2286898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93361, "largest_seqno": 95099, "table_properties": {"data_size": 2279320, "index_size": 3943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23062, "raw_average_key_size": 22, "raw_value_size": 2261434, "raw_average_value_size": 2217, "num_data_blocks": 171, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 459, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095006, "oldest_key_time": 1769095006, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 41499 microseconds, and 7386 cpu microseconds.
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.112729) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 2286898 bytes OK
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.112755) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.117743) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.117760) EVENT_LOG_v1 {"time_micros": 1769095128117755, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.117779) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 2337462, prev total WAL file size 2337462, number of live WAL files 2.
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.118451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034373833' seq:72057594037927935, type:22 .. '6C6F676D0035303335' seq:0, type:0; will stop at (end)
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(2233KB)], [212(10157KB)]
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128118514, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 12687878, "oldest_snapshot_seqno": -1}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 14168 keys, 12485140 bytes, temperature: kUnknown
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128196126, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 12485140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12407526, "index_size": 41085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390104, "raw_average_key_size": 27, "raw_value_size": 12166059, "raw_average_value_size": 858, "num_data_blocks": 1492, "num_entries": 14168, "num_filter_entries": 14168, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.196499) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12485140 bytes
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.198000) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.2 rd, 160.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 15101, records dropped: 933 output_compression: NoCompression
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.198031) EVENT_LOG_v1 {"time_micros": 1769095128198017, "job": 134, "event": "compaction_finished", "compaction_time_micros": 77749, "compaction_time_cpu_micros": 29530, "output_level": 6, "num_output_files": 1, "total_output_size": 12485140, "num_input_records": 15101, "num_output_records": 14168, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128198899, "job": 134, "event": "table_file_deletion", "file_number": 214}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128202340, "job": 134, "event": "table_file_deletion", "file_number": 212}
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.118352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:48.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:18:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:49.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:18:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:50 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:50.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:51 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:51.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:52.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:52 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:53.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:54 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:54 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:54 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:54.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:55.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:55 np0005592157 podman[343949]: 2026-01-22 15:18:55.352806307 +0000 UTC m=+0.087832653 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 10:18:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:55 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:56.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:57 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:57.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:18:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:58 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:58 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:58 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:58.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:59 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:18:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:59.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:00 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:00 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:00.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:19:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:19:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:02 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:02.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:03 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:03 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:04.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:04 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:04 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:19:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:05.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:05 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:06 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:07.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:19:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:08.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:19:09 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:09 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:10 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:10 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:10.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:11.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:11 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:11 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:12 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:19:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:12.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:19:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:13 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:13 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:14.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:14 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:19:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:15.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:19:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:15 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:16.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:19:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:19:16 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:17 np0005592157 podman[344037]: 2026-01-22 15:19:17.337237147 +0000 UTC m=+0.064863996 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 10:19:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:18 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:18 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:19:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/511152699' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:19:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:19:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/511152699' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:19:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:18.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:19 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:19.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:20 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:20.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:19:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:19:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:21.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:21 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e395ff2c-3e7c-4f71-9f37-cdad6f709585 does not exist
Jan 22 10:19:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 43a5c16a-7c82-411a-981c-8c7987a9cd94 does not exist
Jan 22 10:19:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0a602b70-aef9-4aca-9f94-0ed199dea295 does not exist
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:19:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:22.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:19:22 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:23 np0005592157 podman[344328]: 2026-01-22 15:19:22.992226487 +0000 UTC m=+0.020122629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:19:23 np0005592157 podman[344328]: 2026-01-22 15:19:23.164476309 +0000 UTC m=+0.192372471 container create f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:19:23 np0005592157 systemd[1]: Started libpod-conmon-f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6.scope.
Jan 22 10:19:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:19:23 np0005592157 podman[344328]: 2026-01-22 15:19:23.250746254 +0000 UTC m=+0.278642396 container init f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 10:19:23 np0005592157 podman[344328]: 2026-01-22 15:19:23.260123681 +0000 UTC m=+0.288019803 container start f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 10:19:23 np0005592157 relaxed_archimedes[344344]: 167 167
Jan 22 10:19:23 np0005592157 systemd[1]: libpod-f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6.scope: Deactivated successfully.
Jan 22 10:19:23 np0005592157 podman[344328]: 2026-01-22 15:19:23.266302951 +0000 UTC m=+0.294199063 container attach f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:19:23 np0005592157 podman[344328]: 2026-01-22 15:19:23.266877185 +0000 UTC m=+0.294773307 container died f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:19:23 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d550417260723cc15f51c96f9c0e4e8a9e5ba9a216a1e7aafdd08c3e0fd93cd1-merged.mount: Deactivated successfully.
Jan 22 10:19:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:23.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:23 np0005592157 podman[344328]: 2026-01-22 15:19:23.427776972 +0000 UTC m=+0.455673094 container remove f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:19:23 np0005592157 systemd[1]: libpod-conmon-f12f121b0014fcbf8749a91e8a7174ba196b33fc17aa3d70240ac0ca209847b6.scope: Deactivated successfully.
Jan 22 10:19:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:23 np0005592157 podman[344367]: 2026-01-22 15:19:23.583135554 +0000 UTC m=+0.047862383 container create e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:19:23 np0005592157 systemd[1]: Started libpod-conmon-e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b.scope.
Jan 22 10:19:23 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:19:23 np0005592157 podman[344367]: 2026-01-22 15:19:23.556688652 +0000 UTC m=+0.021415511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:19:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f61598b82581c398a768bb9e50fb43f146a1e5715a31df821a1975a3ddc7035/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f61598b82581c398a768bb9e50fb43f146a1e5715a31df821a1975a3ddc7035/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f61598b82581c398a768bb9e50fb43f146a1e5715a31df821a1975a3ddc7035/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f61598b82581c398a768bb9e50fb43f146a1e5715a31df821a1975a3ddc7035/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:23 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f61598b82581c398a768bb9e50fb43f146a1e5715a31df821a1975a3ddc7035/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:23 np0005592157 podman[344367]: 2026-01-22 15:19:23.846754114 +0000 UTC m=+0.311480983 container init e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 10:19:23 np0005592157 podman[344367]: 2026-01-22 15:19:23.853896168 +0000 UTC m=+0.318623017 container start e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:19:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:24.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:24 np0005592157 youthful_beaver[344383]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:19:24 np0005592157 youthful_beaver[344383]: --> relative data size: 1.0
Jan 22 10:19:24 np0005592157 youthful_beaver[344383]: --> All data devices are unavailable
Jan 22 10:19:24 np0005592157 systemd[1]: libpod-e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b.scope: Deactivated successfully.
Jan 22 10:19:24 np0005592157 podman[344367]: 2026-01-22 15:19:24.70410718 +0000 UTC m=+1.168834019 container attach e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:19:24 np0005592157 podman[344367]: 2026-01-22 15:19:24.705632367 +0000 UTC m=+1.170359206 container died e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:19:25 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:25 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0f61598b82581c398a768bb9e50fb43f146a1e5715a31df821a1975a3ddc7035-merged.mount: Deactivated successfully.
Jan 22 10:19:25 np0005592157 podman[344367]: 2026-01-22 15:19:25.15840189 +0000 UTC m=+1.623128729 container remove e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:19:25 np0005592157 systemd[1]: libpod-conmon-e950c7908dbbc767ae40a2b60ca718a0436bd796b0f795522f80ad7aa2be152b.scope: Deactivated successfully.
Jan 22 10:19:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:25.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:25 np0005592157 podman[344510]: 2026-01-22 15:19:25.515035439 +0000 UTC m=+0.080497775 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 10:19:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:25 np0005592157 podman[344577]: 2026-01-22 15:19:25.713045277 +0000 UTC m=+0.020040008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:19:25 np0005592157 podman[344577]: 2026-01-22 15:19:25.853038605 +0000 UTC m=+0.160033316 container create 40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:19:25 np0005592157 systemd[1]: Started libpod-conmon-40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a.scope.
Jan 22 10:19:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:19:26 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:26 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:26 np0005592157 podman[344577]: 2026-01-22 15:19:26.127012208 +0000 UTC m=+0.434006939 container init 40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:19:26 np0005592157 podman[344577]: 2026-01-22 15:19:26.132838319 +0000 UTC m=+0.439833030 container start 40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:19:26 np0005592157 tender_mclean[344595]: 167 167
Jan 22 10:19:26 np0005592157 systemd[1]: libpod-40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a.scope: Deactivated successfully.
Jan 22 10:19:26 np0005592157 podman[344577]: 2026-01-22 15:19:26.156381771 +0000 UTC m=+0.463376482 container attach 40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:19:26 np0005592157 podman[344577]: 2026-01-22 15:19:26.158251486 +0000 UTC m=+0.465246197 container died 40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:19:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d7ce2ee8227a3c49c694dac0fc1460b306f55a03797ea86821a8ea4c27f87c5b-merged.mount: Deactivated successfully.
Jan 22 10:19:26 np0005592157 podman[344577]: 2026-01-22 15:19:26.277975243 +0000 UTC m=+0.584969954 container remove 40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:19:26 np0005592157 systemd[1]: libpod-conmon-40ec4103db1fae03fc78d8a3de2b75f6cb6abbcd64ba8144ccb15a633a919d2a.scope: Deactivated successfully.
Jan 22 10:19:26 np0005592157 podman[344621]: 2026-01-22 15:19:26.432533596 +0000 UTC m=+0.025677635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:19:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:26.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:26 np0005592157 podman[344621]: 2026-01-22 15:19:26.563319741 +0000 UTC m=+0.156463760 container create 16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:19:26 np0005592157 systemd[1]: Started libpod-conmon-16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33.scope.
Jan 22 10:19:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:19:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5eb7470c7afac10cadd41df873354c85a55d35198ad6f5f088ac65560fe3f8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5eb7470c7afac10cadd41df873354c85a55d35198ad6f5f088ac65560fe3f8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5eb7470c7afac10cadd41df873354c85a55d35198ad6f5f088ac65560fe3f8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:26 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5eb7470c7afac10cadd41df873354c85a55d35198ad6f5f088ac65560fe3f8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:26 np0005592157 podman[344621]: 2026-01-22 15:19:26.647481334 +0000 UTC m=+0.240625363 container init 16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:19:26 np0005592157 podman[344621]: 2026-01-22 15:19:26.65636568 +0000 UTC m=+0.249509699 container start 16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:19:26 np0005592157 podman[344621]: 2026-01-22 15:19:26.665973283 +0000 UTC m=+0.259117352 container attach 16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 10:19:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:27.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]: {
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:    "0": [
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:        {
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "devices": [
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "/dev/loop3"
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            ],
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "lv_name": "ceph_lv0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "lv_size": "7511998464",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "name": "ceph_lv0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "tags": {
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.cluster_name": "ceph",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.crush_device_class": "",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.encrypted": "0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.osd_id": "0",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.type": "block",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:                "ceph.vdo": "0"
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            },
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "type": "block",
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:            "vg_name": "ceph_vg0"
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:        }
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]:    ]
Jan 22 10:19:27 np0005592157 eager_wozniak[344686]: }
Jan 22 10:19:27 np0005592157 systemd[1]: libpod-16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33.scope: Deactivated successfully.
Jan 22 10:19:27 np0005592157 podman[344621]: 2026-01-22 15:19:27.436704966 +0000 UTC m=+1.029849005 container died 16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:19:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:27 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f5eb7470c7afac10cadd41df873354c85a55d35198ad6f5f088ac65560fe3f8f-merged.mount: Deactivated successfully.
Jan 22 10:19:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:28 np0005592157 podman[344621]: 2026-01-22 15:19:28.640262337 +0000 UTC m=+2.233406356 container remove 16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wozniak, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 10:19:28 np0005592157 systemd[1]: libpod-conmon-16cb226ce93cb33f6795199d4ad1ed10396089986d5cfed49604aefcc733fc33.scope: Deactivated successfully.
Jan 22 10:19:29 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:29 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:29 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:29 np0005592157 podman[344848]: 2026-01-22 15:19:29.229514944 +0000 UTC m=+0.022811475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:19:29 np0005592157 podman[344848]: 2026-01-22 15:19:29.326019407 +0000 UTC m=+0.119315918 container create 43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:19:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:29.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:29 np0005592157 systemd[1]: Started libpod-conmon-43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d.scope.
Jan 22 10:19:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:19:29 np0005592157 podman[344848]: 2026-01-22 15:19:29.448834849 +0000 UTC m=+0.242131380 container init 43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 10:19:29 np0005592157 podman[344848]: 2026-01-22 15:19:29.455819888 +0000 UTC m=+0.249116409 container start 43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:19:29 np0005592157 keen_robinson[344864]: 167 167
Jan 22 10:19:29 np0005592157 podman[344848]: 2026-01-22 15:19:29.460336208 +0000 UTC m=+0.253632729 container attach 43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 10:19:29 np0005592157 systemd[1]: libpod-43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d.scope: Deactivated successfully.
Jan 22 10:19:29 np0005592157 podman[344848]: 2026-01-22 15:19:29.461829434 +0000 UTC m=+0.255125955 container died 43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:19:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ae49292e1de3d64fe1f7701453f7af2072b7dd2b74c7c3f74e5b0f97c34b8e6c-merged.mount: Deactivated successfully.
Jan 22 10:19:29 np0005592157 podman[344848]: 2026-01-22 15:19:29.545512436 +0000 UTC m=+0.338808947 container remove 43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:19:29 np0005592157 systemd[1]: libpod-conmon-43a6584b2da44ecd266883d65af66e2f3b1dfe286758eca482ee30e5e78f333d.scope: Deactivated successfully.
Jan 22 10:19:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:29 np0005592157 podman[344889]: 2026-01-22 15:19:29.696983084 +0000 UTC m=+0.040971316 container create 90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:19:29 np0005592157 systemd[1]: Started libpod-conmon-90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372.scope.
Jan 22 10:19:29 np0005592157 podman[344889]: 2026-01-22 15:19:29.678169797 +0000 UTC m=+0.022158059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:19:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:19:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8eabf457c114622a94454ace678103fde60ce8ca3f963bf647bf142eee7f767/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8eabf457c114622a94454ace678103fde60ce8ca3f963bf647bf142eee7f767/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8eabf457c114622a94454ace678103fde60ce8ca3f963bf647bf142eee7f767/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8eabf457c114622a94454ace678103fde60ce8ca3f963bf647bf142eee7f767/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:19:29 np0005592157 podman[344889]: 2026-01-22 15:19:29.974217655 +0000 UTC m=+0.318205947 container init 90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:19:29 np0005592157 podman[344889]: 2026-01-22 15:19:29.987231291 +0000 UTC m=+0.331219533 container start 90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 10:19:30 np0005592157 podman[344889]: 2026-01-22 15:19:30.571691451 +0000 UTC m=+0.915679713 container attach 90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:19:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:30 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]: {
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]:        "osd_id": 0,
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]:        "type": "bluestore"
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]:    }
Jan 22 10:19:30 np0005592157 serene_chandrasekhar[344907]: }
Jan 22 10:19:30 np0005592157 systemd[1]: libpod-90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372.scope: Deactivated successfully.
Jan 22 10:19:30 np0005592157 podman[344889]: 2026-01-22 15:19:30.820484302 +0000 UTC m=+1.164472534 container died 90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:19:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d8eabf457c114622a94454ace678103fde60ce8ca3f963bf647bf142eee7f767-merged.mount: Deactivated successfully.
Jan 22 10:19:30 np0005592157 podman[344889]: 2026-01-22 15:19:30.9205169 +0000 UTC m=+1.264505132 container remove 90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:19:30 np0005592157 systemd[1]: libpod-conmon-90c565e041109e0eaa81a86b47b6fe6506bc3ffc874f2b690ed0c2ed083e9372.scope: Deactivated successfully.
Jan 22 10:19:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:19:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:19:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 020a131f-2917-4250-ad34-40aeac66e657 does not exist
Jan 22 10:19:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f2db42b7-97ea-4a30-9770-67e1803b5778 does not exist
Jan 22 10:19:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4f344597-360c-4b5b-99c0-8f4faf54db03 does not exist
Jan 22 10:19:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:31.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:31 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 137 slow ops, oldest one blocked for 6163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:33 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:33 np0005592157 ceph-mon[74359]: Health check update: 137 slow ops, oldest one blocked for 6163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:19:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:33.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:19:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:34 np0005592157 ceph-mon[74359]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:34.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:35 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:35.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:36 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:37 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:37.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 6168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:38 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:38 np0005592157 ceph-mon[74359]: Health check update: 25 slow ops, oldest one blocked for 6168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:38 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:38.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:39.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:40 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:41 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:42.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:42 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:42 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:43 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:44.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:44 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 6178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:45 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:19:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:19:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:46.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:47 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:47 np0005592157 ceph-mon[74359]: Health check update: 25 slow ops, oldest one blocked for 6178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:47.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:19:47
Jan 22 10:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['vms', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control']
Jan 22 10:19:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:19:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:19:47.655 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:19:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:19:47.656 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:19:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:19:47.656 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:19:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:48 np0005592157 podman[345051]: 2026-01-22 15:19:48.339151101 +0000 UTC m=+0.078893997 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:19:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:48 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:49.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:49 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:49 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:50.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:51 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 6183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:52.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:52 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:53 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:53 np0005592157 ceph-mon[74359]: Health check update: 25 slow ops, oldest one blocked for 6183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:53 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.221503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195221550, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1091, "num_deletes": 362, "total_data_size": 1314311, "memory_usage": 1336120, "flush_reason": "Manual Compaction"}
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195297630, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 1293702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95100, "largest_seqno": 96190, "table_properties": {"data_size": 1288682, "index_size": 2223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15155, "raw_average_key_size": 22, "raw_value_size": 1277050, "raw_average_value_size": 1864, "num_data_blocks": 95, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 362, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095128, "oldest_key_time": 1769095128, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 76197 microseconds, and 4356 cpu microseconds.
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.297697) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 1293702 bytes OK
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.297717) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.343044) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.343101) EVENT_LOG_v1 {"time_micros": 1769095195343090, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.343128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1308676, prev total WAL file size 1311073, number of live WAL files 2.
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.343878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(1263KB)], [215(11MB)]
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195344005, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 13778842, "oldest_snapshot_seqno": -1}
Jan 22 10:19:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:19:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 14114 keys, 12015514 bytes, temperature: kUnknown
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195578806, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 12015514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11938389, "index_size": 40724, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35333, "raw_key_size": 389251, "raw_average_key_size": 27, "raw_value_size": 11697967, "raw_average_value_size": 828, "num_data_blocks": 1475, "num_entries": 14114, "num_filter_entries": 14114, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:19:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.579079) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 12015514 bytes
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.613348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.7 rd, 51.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(19.9) write-amplify(9.3) OK, records in: 14853, records dropped: 739 output_compression: NoCompression
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.613395) EVENT_LOG_v1 {"time_micros": 1769095195613377, "job": 136, "event": "compaction_finished", "compaction_time_micros": 234880, "compaction_time_cpu_micros": 29732, "output_level": 6, "num_output_files": 1, "total_output_size": 12015514, "num_input_records": 14853, "num_output_records": 14114, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195614021, "job": 136, "event": "table_file_deletion", "file_number": 217}
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195616426, "job": 136, "event": "table_file_deletion", "file_number": 215}
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.343765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.616541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.616547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.616549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.616550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:19:55.616552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:56 np0005592157 podman[345075]: 2026-01-22 15:19:56.332980103 +0000 UTC m=+0.072710836 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 10:19:56 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:56 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:19:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:58 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:58.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:59 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:59 np0005592157 ceph-mon[74359]: Health check update: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:19:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:19:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:59.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:19:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:00 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:00.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:01.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:01 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:01 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:02 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 6193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:03.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:03 np0005592157 ceph-mon[74359]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:03 np0005592157 ceph-mon[74359]: Health check update: 25 slow ops, oldest one blocked for 6193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:05 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:20:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:05.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:06 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:07 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 72 slow ops, oldest one blocked for 6198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:08 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592157 ceph-mon[74359]: Health check update: 72 slow ops, oldest one blocked for 6198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:08.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:20:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:09.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:20:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:10 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:10.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:11.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:11 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:11 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:12.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 72 slow ops, oldest one blocked for 6203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:13 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:13 np0005592157 ceph-mon[74359]: Health check update: 72 slow ops, oldest one blocked for 6203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:14.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:15 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:15 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:15.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:16 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:20:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:20:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:20:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:16.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:20:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:17 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:17 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 72 slow ops, oldest one blocked for 6208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:20:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/796340529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:20:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:20:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/796340529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:20:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:18.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:19 np0005592157 podman[345162]: 2026-01-22 15:20:19.324709224 +0000 UTC m=+0.057696142 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:20:19 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:19 np0005592157 ceph-mon[74359]: Health check update: 72 slow ops, oldest one blocked for 6208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:20.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:21.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:21 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:21 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:22.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:23 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:24 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:20:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:24.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:20:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:25.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:25 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 72 slow ops, oldest one blocked for 6218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:26 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:26 np0005592157 ceph-mon[74359]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:26.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:26 np0005592157 podman[345208]: 2026-01-22 15:20:26.994122911 +0000 UTC m=+0.088355267 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 10:20:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:27.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:27 np0005592157 ceph-mon[74359]: Health check update: 72 slow ops, oldest one blocked for 6218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:27 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:29 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:29.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:30 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:20:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:20:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:31 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:31 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:20:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:20:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 139 slow ops, oldest one blocked for 6223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592157 ceph-mon[74359]: Health check update: 139 slow ops, oldest one blocked for 6223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:20:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:20:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 518aa667-22b1-46f0-82b3-95d23411e99a does not exist
Jan 22 10:20:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 69a4e1c5-6209-4426-81c7-442361fbc9bb does not exist
Jan 22 10:20:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f61eca1e-5360-46bb-be18-61179f412eab does not exist
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:20:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:34.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:20:35 np0005592157 podman[345539]: 2026-01-22 15:20:35.236106328 +0000 UTC m=+0.023068391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:20:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:35 np0005592157 podman[345539]: 2026-01-22 15:20:35.70041619 +0000 UTC m=+0.487378233 container create f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:20:35 np0005592157 systemd[1]: Started libpod-conmon-f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24.scope.
Jan 22 10:20:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:20:36 np0005592157 podman[345539]: 2026-01-22 15:20:36.022570622 +0000 UTC m=+0.809532685 container init f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:20:36 np0005592157 podman[345539]: 2026-01-22 15:20:36.03155914 +0000 UTC m=+0.818521183 container start f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poitras, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:20:36 np0005592157 goofy_poitras[345556]: 167 167
Jan 22 10:20:36 np0005592157 systemd[1]: libpod-f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24.scope: Deactivated successfully.
Jan 22 10:20:36 np0005592157 podman[345539]: 2026-01-22 15:20:36.149621027 +0000 UTC m=+0.936583070 container attach f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:20:36 np0005592157 podman[345539]: 2026-01-22 15:20:36.150384075 +0000 UTC m=+0.937346158 container died f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:20:36 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9f6ff7905c7abd7a77fd0cc9362cf255a0ede565d2711381be1d05118649d68f-merged.mount: Deactivated successfully.
Jan 22 10:20:36 np0005592157 podman[345539]: 2026-01-22 15:20:36.347392318 +0000 UTC m=+1.134354361 container remove f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:20:36 np0005592157 systemd[1]: libpod-conmon-f9e52cc4c13156d6000bd4a76089af31a9ab32c4083fc15db64511851f4e1e24.scope: Deactivated successfully.
Jan 22 10:20:36 np0005592157 podman[345583]: 2026-01-22 15:20:36.492168903 +0000 UTC m=+0.023559433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:20:36 np0005592157 podman[345583]: 2026-01-22 15:20:36.659132187 +0000 UTC m=+0.190522697 container create 0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:20:36 np0005592157 systemd[1]: Started libpod-conmon-0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345.scope.
Jan 22 10:20:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:20:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17f9a5ff8875d4ece7a751d13c88982086414ddbae99f6da3f4ec4b14ac349/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17f9a5ff8875d4ece7a751d13c88982086414ddbae99f6da3f4ec4b14ac349/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17f9a5ff8875d4ece7a751d13c88982086414ddbae99f6da3f4ec4b14ac349/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17f9a5ff8875d4ece7a751d13c88982086414ddbae99f6da3f4ec4b14ac349/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17f9a5ff8875d4ece7a751d13c88982086414ddbae99f6da3f4ec4b14ac349/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:36.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:36 np0005592157 podman[345583]: 2026-01-22 15:20:36.794230037 +0000 UTC m=+0.325620567 container init 0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:20:36 np0005592157 podman[345583]: 2026-01-22 15:20:36.801483533 +0000 UTC m=+0.332874043 container start 0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:20:36 np0005592157 podman[345583]: 2026-01-22 15:20:36.80956884 +0000 UTC m=+0.340959380 container attach 0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:20:37 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:37 np0005592157 affectionate_ptolemy[345600]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:20:37 np0005592157 affectionate_ptolemy[345600]: --> relative data size: 1.0
Jan 22 10:20:37 np0005592157 affectionate_ptolemy[345600]: --> All data devices are unavailable
Jan 22 10:20:37 np0005592157 systemd[1]: libpod-0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345.scope: Deactivated successfully.
Jan 22 10:20:37 np0005592157 podman[345583]: 2026-01-22 15:20:37.655282993 +0000 UTC m=+1.186673503 container died 0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:20:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ba17f9a5ff8875d4ece7a751d13c88982086414ddbae99f6da3f4ec4b14ac349-merged.mount: Deactivated successfully.
Jan 22 10:20:37 np0005592157 podman[345583]: 2026-01-22 15:20:37.706199749 +0000 UTC m=+1.237590259 container remove 0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:20:37 np0005592157 systemd[1]: libpod-conmon-0e55c3dfb20763ff14259d8ca5accafc1c0957e8d051c0beab0c4fb45d6fb345.scope: Deactivated successfully.
Jan 22 10:20:38 np0005592157 podman[345770]: 2026-01-22 15:20:38.26145262 +0000 UTC m=+0.034976990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:20:38 np0005592157 podman[345770]: 2026-01-22 15:20:38.724836011 +0000 UTC m=+0.498360291 container create 777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:20:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 139 slow ops, oldest one blocked for 6228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:38.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:38 np0005592157 systemd[1]: Started libpod-conmon-777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892.scope.
Jan 22 10:20:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:20:38 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592157 podman[345770]: 2026-01-22 15:20:39.161605355 +0000 UTC m=+0.935129655 container init 777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 10:20:39 np0005592157 podman[345770]: 2026-01-22 15:20:39.170655635 +0000 UTC m=+0.944179925 container start 777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 10:20:39 np0005592157 kind_poincare[345786]: 167 167
Jan 22 10:20:39 np0005592157 systemd[1]: libpod-777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892.scope: Deactivated successfully.
Jan 22 10:20:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:39 np0005592157 podman[345770]: 2026-01-22 15:20:39.510302571 +0000 UTC m=+1.283826951 container attach 777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:20:39 np0005592157 podman[345770]: 2026-01-22 15:20:39.511567782 +0000 UTC m=+1.285092102 container died 777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:20:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d14b816920ce7a75d3b32326ed8bd8f5345f620ecb5c228ea38d7c10aeb83eb6-merged.mount: Deactivated successfully.
Jan 22 10:20:39 np0005592157 podman[345770]: 2026-01-22 15:20:39.898522207 +0000 UTC m=+1.672046487 container remove 777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 10:20:39 np0005592157 systemd[1]: libpod-conmon-777e70c9a9f338d655835dbfba21da23c2855c57bbd40937b88315b73b82a892.scope: Deactivated successfully.
Jan 22 10:20:39 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592157 ceph-mon[74359]: Health check update: 139 slow ops, oldest one blocked for 6228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:40 np0005592157 podman[345812]: 2026-01-22 15:20:40.035452641 +0000 UTC m=+0.023742227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:20:40 np0005592157 podman[345812]: 2026-01-22 15:20:40.40116336 +0000 UTC m=+0.389452916 container create 1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 10:20:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:40 np0005592157 systemd[1]: Started libpod-conmon-1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2.scope.
Jan 22 10:20:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:20:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a105dfbb78ce779a270e9b4041937845f8dec7debc72bcc5c2d5c304b72134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a105dfbb78ce779a270e9b4041937845f8dec7debc72bcc5c2d5c304b72134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a105dfbb78ce779a270e9b4041937845f8dec7debc72bcc5c2d5c304b72134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a105dfbb78ce779a270e9b4041937845f8dec7debc72bcc5c2d5c304b72134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:41 np0005592157 podman[345812]: 2026-01-22 15:20:41.274030774 +0000 UTC m=+1.262320360 container init 1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:20:41 np0005592157 podman[345812]: 2026-01-22 15:20:41.281044624 +0000 UTC m=+1.269334190 container start 1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:20:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:41.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:41 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:41 np0005592157 podman[345812]: 2026-01-22 15:20:41.740639733 +0000 UTC m=+1.728929299 container attach 1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]: {
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:    "0": [
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:        {
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "devices": [
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "/dev/loop3"
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            ],
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "lv_name": "ceph_lv0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "lv_size": "7511998464",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "name": "ceph_lv0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "tags": {
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.cluster_name": "ceph",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.crush_device_class": "",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.encrypted": "0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.osd_id": "0",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.type": "block",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:                "ceph.vdo": "0"
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            },
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "type": "block",
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:            "vg_name": "ceph_vg0"
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:        }
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]:    ]
Jan 22 10:20:42 np0005592157 priceless_mirzakhani[345830]: }
Jan 22 10:20:42 np0005592157 systemd[1]: libpod-1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2.scope: Deactivated successfully.
Jan 22 10:20:42 np0005592157 podman[345812]: 2026-01-22 15:20:42.11612776 +0000 UTC m=+2.104417316 container died 1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mirzakhani, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:20:42 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f4a105dfbb78ce779a270e9b4041937845f8dec7debc72bcc5c2d5c304b72134-merged.mount: Deactivated successfully.
Jan 22 10:20:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:42.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:44.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:45.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:46 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:46 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:20:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:20:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:46.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:47 np0005592157 podman[345812]: 2026-01-22 15:20:47.018122546 +0000 UTC m=+7.006412102 container remove 1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mirzakhani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:20:47 np0005592157 systemd[1]: libpod-conmon-1e7112b8febb52bb0d8cd1957afab479bc5b32c7d42a6fa0a169e782c11680d2.scope: Deactivated successfully.
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 139 slow ops, oldest one blocked for 6233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:20:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:47.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:20:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:20:47
Jan 22 10:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes']
Jan 22 10:20:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592157 ceph-mon[74359]: Health check update: 139 slow ops, oldest one blocked for 6233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:20:47.655 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:20:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:20:47.657 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:20:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:20:47.657 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:20:47 np0005592157 podman[346046]: 2026-01-22 15:20:47.612559169 +0000 UTC m=+0.023520162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:20:48 np0005592157 podman[346046]: 2026-01-22 15:20:48.030971367 +0000 UTC m=+0.441932350 container create b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:20:48 np0005592157 systemd[1]: Started libpod-conmon-b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103.scope.
Jan 22 10:20:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:20:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:48.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:49 np0005592157 podman[346046]: 2026-01-22 15:20:49.212985236 +0000 UTC m=+1.623946199 container init b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:20:49 np0005592157 podman[346046]: 2026-01-22 15:20:49.223787128 +0000 UTC m=+1.634748081 container start b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:20:49 np0005592157 busy_cerf[346064]: 167 167
Jan 22 10:20:49 np0005592157 systemd[1]: libpod-b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103.scope: Deactivated successfully.
Jan 22 10:20:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:49 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:49 np0005592157 podman[346046]: 2026-01-22 15:20:49.497483463 +0000 UTC m=+1.908444406 container attach b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:20:49 np0005592157 podman[346046]: 2026-01-22 15:20:49.498814686 +0000 UTC m=+1.909775639 container died b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:20:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-768167bc9d926339e498ae2fe97fc5326f7eef434c9131564eac7972cd6c4120-merged.mount: Deactivated successfully.
Jan 22 10:20:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:50.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:51 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592157 podman[346046]: 2026-01-22 15:20:51.068095416 +0000 UTC m=+3.479056349 container remove b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:20:51 np0005592157 systemd[1]: libpod-conmon-b7167f7203de051bae6d194cdf81c33ded3ca78e2acbe33ee471c8ff11259103.scope: Deactivated successfully.
Jan 22 10:20:51 np0005592157 podman[346083]: 2026-01-22 15:20:51.144496821 +0000 UTC m=+1.195090557 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:20:51 np0005592157 podman[346108]: 2026-01-22 15:20:51.218322564 +0000 UTC m=+0.023893301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:20:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:51.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 139 slow ops, oldest one blocked for 6238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:52 np0005592157 podman[346108]: 2026-01-22 15:20:52.39241819 +0000 UTC m=+1.197988917 container create ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:20:52 np0005592157 systemd[1]: Started libpod-conmon-ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467.scope.
Jan 22 10:20:52 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:52 np0005592157 ceph-mon[74359]: Health check update: 139 slow ops, oldest one blocked for 6238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:20:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a7399c28b390803f86b2dd266e4d2ec3cf4f175970c237cb3bbafac1865ca9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a7399c28b390803f86b2dd266e4d2ec3cf4f175970c237cb3bbafac1865ca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a7399c28b390803f86b2dd266e4d2ec3cf4f175970c237cb3bbafac1865ca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a7399c28b390803f86b2dd266e4d2ec3cf4f175970c237cb3bbafac1865ca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:20:52 np0005592157 podman[346108]: 2026-01-22 15:20:52.748234869 +0000 UTC m=+1.553805616 container init ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:20:52 np0005592157 podman[346108]: 2026-01-22 15:20:52.755590718 +0000 UTC m=+1.561161425 container start ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 22 10:20:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:52.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:52 np0005592157 podman[346108]: 2026-01-22 15:20:52.903039068 +0000 UTC m=+1.708609785 container attach ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:20:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:53.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]: {
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]:        "osd_id": 0,
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]:        "type": "bluestore"
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]:    }
Jan 22 10:20:53 np0005592157 objective_leavitt[346125]: }
Jan 22 10:20:53 np0005592157 systemd[1]: libpod-ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467.scope: Deactivated successfully.
Jan 22 10:20:53 np0005592157 podman[346108]: 2026-01-22 15:20:53.567434908 +0000 UTC m=+2.373005625 container died ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:20:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:54 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:54 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-98a7399c28b390803f86b2dd266e4d2ec3cf4f175970c237cb3bbafac1865ca9-merged.mount: Deactivated successfully.
Jan 22 10:20:54 np0005592157 podman[346108]: 2026-01-22 15:20:54.609385897 +0000 UTC m=+3.414956614 container remove ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:20:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:20:54 np0005592157 systemd[1]: libpod-conmon-ccd61aa6920ae82d8e4b0b77500ad359086c5505c05e97e3a29c95a2afd51467.scope: Deactivated successfully.
Jan 22 10:20:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:20:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:54.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bc3117b6-ff6a-458f-9074-1d0a1bedecde does not exist
Jan 22 10:20:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 18f54f1e-02f0-4a5f-9f47-0518d9ad186c does not exist
Jan 22 10:20:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dee93080-19b6-4b23-8d57-b8a6ecec07b2 does not exist
Jan 22 10:20:55 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:55.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:56 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:56 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 139 slow ops, oldest one blocked for 6243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:57 np0005592157 podman[346212]: 2026-01-22 15:20:57.350901458 +0000 UTC m=+0.089315080 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 10:20:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:58 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:58 np0005592157 ceph-mon[74359]: Health check update: 139 slow ops, oldest one blocked for 6243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:20:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:58.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:20:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:20:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:20:59 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:20:59 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:01 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 139 slow ops, oldest one blocked for 6247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:02 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:02 np0005592157 ceph-mon[74359]: Health check update: 139 slow ops, oldest one blocked for 6247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:02.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:03.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:03 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:03 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:04 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:04.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:21:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:05.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:05 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:06 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:06.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 76 slow ops, oldest one blocked for 6258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:07.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:07 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:07 np0005592157 ceph-mon[74359]: Health check update: 76 slow ops, oldest one blocked for 6258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:08.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:08 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:09.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:10 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:10.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:11 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:11.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 76 slow ops, oldest one blocked for 6263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:12 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:13 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592157 ceph-mon[74359]: Health check update: 76 slow ops, oldest one blocked for 6263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:13 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:14 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:15.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:16 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:21:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:21:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:16.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:17 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 76 slow ops, oldest one blocked for 6268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:17.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:18 np0005592157 ceph-mon[74359]: Health check update: 76 slow ops, oldest one blocked for 6268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:18 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:19.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:19 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:20.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:20 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:21 np0005592157 podman[346301]: 2026-01-22 15:21:21.340733262 +0000 UTC m=+0.080132727 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:21:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:21.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:21 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 76 slow ops, oldest one blocked for 6273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:22 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:22 np0005592157 ceph-mon[74359]: Health check update: 76 slow ops, oldest one blocked for 6273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:23.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:24 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:24.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:25 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:25.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:26.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:27 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:27 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 76 slow ops, oldest one blocked for 6278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:27 np0005592157 podman[346373]: 2026-01-22 15:21:27.500150757 +0000 UTC m=+0.090822876 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 10:21:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:27.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:28 np0005592157 ceph-mon[74359]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:28 np0005592157 ceph-mon[74359]: Health check update: 76 slow ops, oldest one blocked for 6278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:28.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:29 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:29.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:30.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:30 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:30 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:32 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 140 slow ops, oldest one blocked for 6283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:21:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:21:33 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:33 np0005592157 ceph-mon[74359]: Health check update: 140 slow ops, oldest one blocked for 6283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:34 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:34.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:35 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:35.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:36 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:36.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:37 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 140 slow ops, oldest one blocked for 6288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:21:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:37.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:21:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:38 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:38 np0005592157 ceph-mon[74359]: Health check update: 140 slow ops, oldest one blocked for 6288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:38.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:39.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:39 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:40 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:40 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:40.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:41 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 6293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:42.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:43 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:43 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 6293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:43.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:44 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:44.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:45 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:45.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:21:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:21:46 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 6298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:21:47
Jan 22 10:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', 'volumes']
Jan 22 10:21:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:21:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:21:47.656 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:21:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:21:47.657 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:21:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:21:47.657 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:21:47 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 6298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:48 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:49.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:49 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:21:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:21:50 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:51.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:52 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:52 np0005592157 podman[346463]: 2026-01-22 15:21:52.309782124 +0000 UTC m=+0.042756230 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:21:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 6303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:53 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:53 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 6303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:21:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:21:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:54 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:55 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:55 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:21:55.602 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:21:55 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:21:55.605 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:21:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:21:56 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5327fec3-d953-4813-a942-1a04a6c8d611 does not exist
Jan 22 10:21:56 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9f4d54b5-17a4-4665-9efe-1015aff63be6 does not exist
Jan 22 10:21:56 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4702913e-859f-4988-925d-f21de2bce6bf does not exist
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:56 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:21:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:21:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:56.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:21:56 np0005592157 podman[346757]: 2026-01-22 15:21:56.849601537 +0000 UTC m=+0.021938683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 20K writes, 97K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s#012Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1706 writes, 8860 keys, 1706 commit groups, 1.0 writes per commit group, ingest: 10.70 MB, 0.02 MB/s#012Interval WAL: 1707 writes, 1707 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     64.3      1.74              0.48        68    0.026       0      0       0.0       0.0#012  L6      1/0   11.46 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.8    112.3     97.2      6.69              2.41        67    0.100    697K    38K       0.0       0.0#012 Sum      1/0   11.46 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.8     89.1     90.4      8.43              2.89       135    0.062    697K    38K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     90.6     91.4      0.85              0.22        12    0.071     89K   4959       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    112.3     97.2      6.69              2.41        67    0.100    697K    38K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     64.4      1.74              0.48        67    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.109, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.74 GB write, 0.12 MB/s write, 0.73 GB read, 0.11 MB/s read, 8.4 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 80.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000564 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4264,76.38 MB,25.1266%) FilterBlock(136,1.89 MB,0.622935%) IndexBlock(136,2.36 MB,0.777752%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:21:57 np0005592157 podman[346757]: 2026-01-22 15:21:57.03341295 +0000 UTC m=+0.205750066 container create d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:21:57 np0005592157 systemd[1]: Started libpod-conmon-d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138.scope.
Jan 22 10:21:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 6308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:21:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:21:57 np0005592157 podman[346757]: 2026-01-22 15:21:57.567243621 +0000 UTC m=+0.739580777 container init d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:21:57 np0005592157 podman[346757]: 2026-01-22 15:21:57.575654656 +0000 UTC m=+0.747991792 container start d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jepsen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 10:21:57 np0005592157 stupefied_jepsen[346773]: 167 167
Jan 22 10:21:57 np0005592157 systemd[1]: libpod-d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138.scope: Deactivated successfully.
Jan 22 10:21:57 np0005592157 conmon[346773]: conmon d9891b5a3110bac7a4c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138.scope/container/memory.events
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:21:57 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:21:57 np0005592157 podman[346757]: 2026-01-22 15:21:57.785279595 +0000 UTC m=+0.957616731 container attach d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jepsen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:21:57 np0005592157 podman[346757]: 2026-01-22 15:21:57.786359742 +0000 UTC m=+0.958696868 container died d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 22 10:21:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d344d263eb5c701365caaffcb211b04c741730afabc6349c8a34c8f37763f8bd-merged.mount: Deactivated successfully.
Jan 22 10:21:58 np0005592157 podman[346757]: 2026-01-22 15:21:58.032250871 +0000 UTC m=+1.204587987 container remove d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jepsen, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:21:58 np0005592157 systemd[1]: libpod-conmon-d9891b5a3110bac7a4c9472b71050061b3b7869328a0d5a0425316e4b6551138.scope: Deactivated successfully.
Jan 22 10:21:58 np0005592157 podman[346778]: 2026-01-22 15:21:58.146365242 +0000 UTC m=+0.533564896 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 10:21:58 np0005592157 podman[346823]: 2026-01-22 15:21:58.198653741 +0000 UTC m=+0.046069259 container create 7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:21:58 np0005592157 systemd[1]: Started libpod-conmon-7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13.scope.
Jan 22 10:21:58 np0005592157 podman[346823]: 2026-01-22 15:21:58.177768894 +0000 UTC m=+0.025184432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:21:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:21:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7272754eb1f4166bb8eaf809d7e991482c2908eaca8cf2f335869145e7b8efda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:21:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7272754eb1f4166bb8eaf809d7e991482c2908eaca8cf2f335869145e7b8efda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:21:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7272754eb1f4166bb8eaf809d7e991482c2908eaca8cf2f335869145e7b8efda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:21:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7272754eb1f4166bb8eaf809d7e991482c2908eaca8cf2f335869145e7b8efda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:21:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7272754eb1f4166bb8eaf809d7e991482c2908eaca8cf2f335869145e7b8efda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:21:58 np0005592157 podman[346823]: 2026-01-22 15:21:58.351879102 +0000 UTC m=+0.199294640 container init 7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:21:58 np0005592157 podman[346823]: 2026-01-22 15:21:58.360577633 +0000 UTC m=+0.207993151 container start 7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:21:58 np0005592157 podman[346823]: 2026-01-22 15:21:58.371068647 +0000 UTC m=+0.218484165 container attach 7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:21:58 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 6308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:58 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:59 np0005592157 kind_hugle[346841]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:21:59 np0005592157 kind_hugle[346841]: --> relative data size: 1.0
Jan 22 10:21:59 np0005592157 kind_hugle[346841]: --> All data devices are unavailable
Jan 22 10:21:59 np0005592157 systemd[1]: libpod-7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13.scope: Deactivated successfully.
Jan 22 10:21:59 np0005592157 podman[346823]: 2026-01-22 15:21:59.205005145 +0000 UTC m=+1.052420663 container died 7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 10:21:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:21:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:21:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:21:59 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:21:59.608 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:21:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:00 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7272754eb1f4166bb8eaf809d7e991482c2908eaca8cf2f335869145e7b8efda-merged.mount: Deactivated successfully.
Jan 22 10:22:00 np0005592157 podman[346823]: 2026-01-22 15:22:00.235154296 +0000 UTC m=+2.082569854 container remove 7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:22:00 np0005592157 systemd[1]: libpod-conmon-7cc225fced1525cf74663d77b0774b3a580e5afa9583cf5daf469a60eef16d13.scope: Deactivated successfully.
Jan 22 10:22:00 np0005592157 podman[347009]: 2026-01-22 15:22:00.836668601 +0000 UTC m=+0.062711584 container create ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 10:22:00 np0005592157 systemd[1]: Started libpod-conmon-ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d.scope.
Jan 22 10:22:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:00 np0005592157 podman[347009]: 2026-01-22 15:22:00.791821532 +0000 UTC m=+0.017864535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:22:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:22:00 np0005592157 podman[347009]: 2026-01-22 15:22:00.931187606 +0000 UTC m=+0.157230629 container init ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 10:22:00 np0005592157 podman[347009]: 2026-01-22 15:22:00.937854388 +0000 UTC m=+0.163897371 container start ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:22:00 np0005592157 podman[347009]: 2026-01-22 15:22:00.941863535 +0000 UTC m=+0.167906518 container attach ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:22:00 np0005592157 systemd[1]: libpod-ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d.scope: Deactivated successfully.
Jan 22 10:22:00 np0005592157 recursing_kilby[347025]: 167 167
Jan 22 10:22:00 np0005592157 conmon[347025]: conmon ba890e3068a7ab0419d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d.scope/container/memory.events
Jan 22 10:22:00 np0005592157 podman[347009]: 2026-01-22 15:22:00.943322571 +0000 UTC m=+0.169365554 container died ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:22:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fdab6b0d38f1042e2d3fd2ae1923bd37bc3a5dd09e4f4e3ff58849354d585679-merged.mount: Deactivated successfully.
Jan 22 10:22:01 np0005592157 podman[347009]: 2026-01-22 15:22:01.003184764 +0000 UTC m=+0.229227747 container remove ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:22:01 np0005592157 systemd[1]: libpod-conmon-ba890e3068a7ab0419d53fb281c8757eb2040086ee4540dcb7e82497fb6f014d.scope: Deactivated successfully.
Jan 22 10:22:01 np0005592157 podman[347049]: 2026-01-22 15:22:01.175294803 +0000 UTC m=+0.058993464 container create a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:22:01 np0005592157 systemd[1]: Started libpod-conmon-a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c.scope.
Jan 22 10:22:01 np0005592157 podman[347049]: 2026-01-22 15:22:01.14143023 +0000 UTC m=+0.025128911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:22:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:22:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbfba44577fea7c0ecba7a4149c356287002bd3cdd69021468a117173bc57214/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbfba44577fea7c0ecba7a4149c356287002bd3cdd69021468a117173bc57214/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbfba44577fea7c0ecba7a4149c356287002bd3cdd69021468a117173bc57214/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbfba44577fea7c0ecba7a4149c356287002bd3cdd69021468a117173bc57214/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:01 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:01 np0005592157 podman[347049]: 2026-01-22 15:22:01.284976976 +0000 UTC m=+0.168675637 container init a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:22:01 np0005592157 podman[347049]: 2026-01-22 15:22:01.296350832 +0000 UTC m=+0.180049513 container start a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 22 10:22:01 np0005592157 podman[347049]: 2026-01-22 15:22:01.302539662 +0000 UTC m=+0.186238323 container attach a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:22:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:22:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:22:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:02 np0005592157 competent_swirles[347065]: {
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:    "0": [
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:        {
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "devices": [
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "/dev/loop3"
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            ],
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "lv_name": "ceph_lv0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "lv_size": "7511998464",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "name": "ceph_lv0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "tags": {
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.cluster_name": "ceph",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.crush_device_class": "",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.encrypted": "0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.osd_id": "0",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.type": "block",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:                "ceph.vdo": "0"
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            },
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "type": "block",
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:            "vg_name": "ceph_vg0"
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:        }
Jan 22 10:22:02 np0005592157 competent_swirles[347065]:    ]
Jan 22 10:22:02 np0005592157 competent_swirles[347065]: }
Jan 22 10:22:02 np0005592157 systemd[1]: libpod-a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c.scope: Deactivated successfully.
Jan 22 10:22:02 np0005592157 podman[347049]: 2026-01-22 15:22:02.090740368 +0000 UTC m=+0.974439029 container died a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_swirles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:22:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bbfba44577fea7c0ecba7a4149c356287002bd3cdd69021468a117173bc57214-merged.mount: Deactivated successfully.
Jan 22 10:22:02 np0005592157 podman[347049]: 2026-01-22 15:22:02.146686226 +0000 UTC m=+1.030384887 container remove a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 10:22:02 np0005592157 systemd[1]: libpod-conmon-a537a5ae18a6bd27df4494fa42e47926e5b2ab36e83142a24a6933f8fe7d347c.scope: Deactivated successfully.
Jan 22 10:22:02 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 63 slow ops, oldest one blocked for 6313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:02 np0005592157 podman[347224]: 2026-01-22 15:22:02.699779375 +0000 UTC m=+0.019814272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:22:02 np0005592157 podman[347224]: 2026-01-22 15:22:02.862500476 +0000 UTC m=+0.182535353 container create 6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 10:22:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:02.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:03 np0005592157 systemd[1]: Started libpod-conmon-6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a.scope.
Jan 22 10:22:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:22:03 np0005592157 podman[347224]: 2026-01-22 15:22:03.161289471 +0000 UTC m=+0.481324368 container init 6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:22:03 np0005592157 podman[347224]: 2026-01-22 15:22:03.167654695 +0000 UTC m=+0.487689572 container start 6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:22:03 np0005592157 vigilant_khayyam[347241]: 167 167
Jan 22 10:22:03 np0005592157 systemd[1]: libpod-6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a.scope: Deactivated successfully.
Jan 22 10:22:03 np0005592157 podman[347224]: 2026-01-22 15:22:03.275180316 +0000 UTC m=+0.595215213 container attach 6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:22:03 np0005592157 podman[347224]: 2026-01-22 15:22:03.27576995 +0000 UTC m=+0.595804827 container died 6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:22:03 np0005592157 ceph-mon[74359]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:03 np0005592157 ceph-mon[74359]: Health check update: 63 slow ops, oldest one blocked for 6313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:22:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:22:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f073dc10d4b0448c40973b990a3d2c9561f6051f88c94ae7915bcb51be70a4a7-merged.mount: Deactivated successfully.
Jan 22 10:22:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:03 np0005592157 podman[347224]: 2026-01-22 15:22:03.918221698 +0000 UTC m=+1.238256585 container remove 6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:22:03 np0005592157 systemd[1]: libpod-conmon-6060fcd46fb9d34bc6343e09840e447d97dae5a772df660e02df5bc1d9368e9a.scope: Deactivated successfully.
Jan 22 10:22:04 np0005592157 podman[347267]: 2026-01-22 15:22:04.051477304 +0000 UTC m=+0.022005336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:22:04 np0005592157 podman[347267]: 2026-01-22 15:22:04.709256075 +0000 UTC m=+0.679784087 container create 37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:22:04 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:04 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:04 np0005592157 systemd[1]: Started libpod-conmon-37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b.scope.
Jan 22 10:22:04 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:22:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5824277cce39505fc103a8ea8d2a13853f11e42b6c9d361fc0e46c573fffb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5824277cce39505fc103a8ea8d2a13853f11e42b6c9d361fc0e46c573fffb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5824277cce39505fc103a8ea8d2a13853f11e42b6c9d361fc0e46c573fffb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:04 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5824277cce39505fc103a8ea8d2a13853f11e42b6c9d361fc0e46c573fffb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:22:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:04.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:22:05 np0005592157 podman[347267]: 2026-01-22 15:22:05.225243471 +0000 UTC m=+1.195771573 container init 37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 10:22:05 np0005592157 podman[347267]: 2026-01-22 15:22:05.234119667 +0000 UTC m=+1.204647719 container start 37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:22:05 np0005592157 podman[347267]: 2026-01-22 15:22:05.426964289 +0000 UTC m=+1.397492361 container attach 37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 10:22:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:05.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:06 np0005592157 stoic_cray[347284]: {
Jan 22 10:22:06 np0005592157 stoic_cray[347284]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:22:06 np0005592157 stoic_cray[347284]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:22:06 np0005592157 stoic_cray[347284]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:22:06 np0005592157 stoic_cray[347284]:        "osd_id": 0,
Jan 22 10:22:06 np0005592157 stoic_cray[347284]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:22:06 np0005592157 stoic_cray[347284]:        "type": "bluestore"
Jan 22 10:22:06 np0005592157 stoic_cray[347284]:    }
Jan 22 10:22:06 np0005592157 stoic_cray[347284]: }
Jan 22 10:22:06 np0005592157 systemd[1]: libpod-37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b.scope: Deactivated successfully.
Jan 22 10:22:06 np0005592157 podman[347267]: 2026-01-22 15:22:06.124236809 +0000 UTC m=+2.094764841 container died 37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 10:22:06 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6e5824277cce39505fc103a8ea8d2a13853f11e42b6c9d361fc0e46c573fffb8-merged.mount: Deactivated successfully.
Jan 22 10:22:06 np0005592157 podman[347267]: 2026-01-22 15:22:06.84919169 +0000 UTC m=+2.819719702 container remove 37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:22:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:06.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:06 np0005592157 systemd[1]: libpod-conmon-37b9caf65c1dccb88cf670e4aa75d9e957a835c18290e4997017e5f89aab114b.scope: Deactivated successfully.
Jan 22 10:22:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:22:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:06 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:22:06 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:06 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c1f557a5-89e0-489a-804e-c9ba81d93aae does not exist
Jan 22 10:22:06 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 15a89ed5-fc5d-466c-ba9c-cadc96e3c20b does not exist
Jan 22 10:22:06 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f4c231a5-c790-418b-8962-6465855a660f does not exist
Jan 22 10:22:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 6318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:07 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:07 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:08 np0005592157 ceph-mon[74359]: Health check update: 26 slow ops, oldest one blocked for 6318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:08 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:08.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:09.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:09 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:10.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:11 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:12 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 6323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:12.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:13.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:13 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:13 np0005592157 ceph-mon[74359]: Health check update: 26 slow ops, oldest one blocked for 6323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:14 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:14 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000073s ======
Jan 22 10:22:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:14.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000073s
Jan 22 10:22:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:15.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:16 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:22:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:22:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:16.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:17 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:17.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 6328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:18 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:18 np0005592157 ceph-mon[74359]: Health check update: 26 slow ops, oldest one blocked for 6328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:18 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:19.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:19 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:20.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:21 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:21.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:22 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 6333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:22.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:23 np0005592157 podman[347428]: 2026-01-22 15:22:23.327780754 +0000 UTC m=+0.067266885 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 10:22:23 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:23 np0005592157 ceph-mon[74359]: Health check update: 26 slow ops, oldest one blocked for 6333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:22:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:23.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:22:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:24 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:24 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:24.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:25.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 22 10:22:25 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:26.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:27 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:27.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 22 10:22:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 6338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:28 np0005592157 podman[347498]: 2026-01-22 15:22:28.353186967 +0000 UTC m=+0.085578479 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 10:22:28 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:28 np0005592157 ceph-mon[74359]: Health check update: 26 slow ops, oldest one blocked for 6338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:28.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:29.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 12 op/s
Jan 22 10:22:29 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:29 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:30.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:31 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:31.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 10:22:32 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:32 np0005592157 ceph-mon[74359]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 6343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:33 np0005592157 ceph-mon[74359]: Health check update: 26 slow ops, oldest one blocked for 6343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:33 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 10:22:34 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:34.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 10:22:35 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:36.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:36 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:37.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 10:22:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 6348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:38 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:38 np0005592157 ceph-mon[74359]: Health check update: 27 slow ops, oldest one blocked for 6348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:38.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:39 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:39 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:39.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 10:22:40 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 22 10:22:41 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 6353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:42 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:42 np0005592157 ceph-mon[74359]: Health check update: 27 slow ops, oldest one blocked for 6353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:42.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:43.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:43 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:44 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:45 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:22:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:22:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:46 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:22:47
Jan 22 10:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.mgr', 'backups', 'volumes']
Jan 22 10:22:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:22:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:22:47.658 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:22:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:22:47.659 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:22:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:22:47.659 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:22:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 6358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:47 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:47 np0005592157 ceph-mon[74359]: Health check update: 27 slow ops, oldest one blocked for 6358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:48.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:48 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:49.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:49 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:50.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:51 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:51.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:52 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 6363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:52.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:53 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:53 np0005592157 ceph-mon[74359]: Health check update: 27 slow ops, oldest one blocked for 6363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:53.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:54 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:54 np0005592157 podman[347587]: 2026-01-22 15:22:54.315360416 +0000 UTC m=+0.050246181 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:22:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:54.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:55 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:55.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:56 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:56.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:57 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:57.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:22:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 6368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:58 np0005592157 ceph-mon[74359]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:58 np0005592157 ceph-mon[74359]: Health check update: 27 slow ops, oldest one blocked for 6368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:59 np0005592157 podman[347609]: 2026-01-22 15:22:59.342738717 +0000 UTC m=+0.072335948 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:22:59 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:22:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:22:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:22:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:59.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:22:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.594618) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380594743, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 2569, "num_deletes": 569, "total_data_size": 3534711, "memory_usage": 3602392, "flush_reason": "Manual Compaction"}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380615004, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 3454007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96191, "largest_seqno": 98759, "table_properties": {"data_size": 3443431, "index_size": 5853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 33366, "raw_average_key_size": 23, "raw_value_size": 3417977, "raw_average_value_size": 2380, "num_data_blocks": 250, "num_entries": 1436, "num_filter_entries": 1436, "num_deletions": 569, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095195, "oldest_key_time": 1769095195, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 20422 microseconds, and 7703 cpu microseconds.
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.615054) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 3454007 bytes OK
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.615075) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.617547) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.617568) EVENT_LOG_v1 {"time_micros": 1769095380617562, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.617589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 3522627, prev total WAL file size 3522627, number of live WAL files 2.
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.618586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(3373KB)], [218(11MB)]
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380618687, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 15469521, "oldest_snapshot_seqno": -1}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 14397 keys, 13609495 bytes, temperature: kUnknown
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380709458, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 13609495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13528817, "index_size": 43573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 394603, "raw_average_key_size": 27, "raw_value_size": 13281868, "raw_average_value_size": 922, "num_data_blocks": 1597, "num_entries": 14397, "num_filter_entries": 14397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.709759) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 13609495 bytes
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.712554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.3 rd, 149.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 11.5 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 15550, records dropped: 1153 output_compression: NoCompression
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.712589) EVENT_LOG_v1 {"time_micros": 1769095380712575, "job": 138, "event": "compaction_finished", "compaction_time_micros": 90861, "compaction_time_cpu_micros": 32023, "output_level": 6, "num_output_files": 1, "total_output_size": 13609495, "num_input_records": 15550, "num_output_records": 14397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380713472, "job": 138, "event": "table_file_deletion", "file_number": 220}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380716203, "job": 138, "event": "table_file_deletion", "file_number": 218}
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.618457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.716319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.716326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.716328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.716330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:00.716331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:01 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:01.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:02.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:02 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:03.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:04 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:04.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:05 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:23:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:23:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:05.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:23:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:06 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:06 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:06.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:07 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:07 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:07.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:08 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3ba86a29-3079-4690-ac8f-1462650732f4 does not exist
Jan 22 10:23:08 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4c8356f7-0dab-4202-b142-81bfdc78ca05 does not exist
Jan 22 10:23:08 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5d1758a4-c87b-4450-a378-77fad3b80151 does not exist
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:08 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:23:08 np0005592157 podman[347962]: 2026-01-22 15:23:08.807504875 +0000 UTC m=+0.036006856 container create 75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:23:08 np0005592157 systemd[1]: Started libpod-conmon-75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f.scope.
Jan 22 10:23:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:23:08 np0005592157 podman[347962]: 2026-01-22 15:23:08.88638889 +0000 UTC m=+0.114890901 container init 75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:23:08 np0005592157 podman[347962]: 2026-01-22 15:23:08.790914752 +0000 UTC m=+0.019416763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:23:08 np0005592157 podman[347962]: 2026-01-22 15:23:08.897741456 +0000 UTC m=+0.126243447 container start 75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:23:08 np0005592157 naughty_goodall[347978]: 167 167
Jan 22 10:23:08 np0005592157 systemd[1]: libpod-75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f.scope: Deactivated successfully.
Jan 22 10:23:08 np0005592157 podman[347962]: 2026-01-22 15:23:08.92550757 +0000 UTC m=+0.154009581 container attach 75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:23:08 np0005592157 podman[347962]: 2026-01-22 15:23:08.92798507 +0000 UTC m=+0.156487051 container died 75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:23:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:09.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8d2db1c4e222f22d509288871f4fa1feac377a274e8e64b750d5e1cdef649560-merged.mount: Deactivated successfully.
Jan 22 10:23:09 np0005592157 podman[347962]: 2026-01-22 15:23:09.163471157 +0000 UTC m=+0.391973178 container remove 75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 10:23:09 np0005592157 systemd[1]: libpod-conmon-75d06a1aac7ac86812214017dc78f1fc115b117d06800333f13ee41845a9445f.scope: Deactivated successfully.
Jan 22 10:23:09 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:09 np0005592157 podman[348003]: 2026-01-22 15:23:09.365018911 +0000 UTC m=+0.039922531 container create f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:23:09 np0005592157 systemd[1]: Started libpod-conmon-f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4.scope.
Jan 22 10:23:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:23:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e80748757a6b83512d5efecb92e55ff4f19b3ba94813c5d958a0171b95ddf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e80748757a6b83512d5efecb92e55ff4f19b3ba94813c5d958a0171b95ddf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e80748757a6b83512d5efecb92e55ff4f19b3ba94813c5d958a0171b95ddf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e80748757a6b83512d5efecb92e55ff4f19b3ba94813c5d958a0171b95ddf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e80748757a6b83512d5efecb92e55ff4f19b3ba94813c5d958a0171b95ddf5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:09 np0005592157 podman[348003]: 2026-01-22 15:23:09.345731692 +0000 UTC m=+0.020635332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:23:09 np0005592157 podman[348003]: 2026-01-22 15:23:09.448536997 +0000 UTC m=+0.123440627 container init f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:23:09 np0005592157 podman[348003]: 2026-01-22 15:23:09.458377466 +0000 UTC m=+0.133281096 container start f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:23:09 np0005592157 podman[348003]: 2026-01-22 15:23:09.462692351 +0000 UTC m=+0.137595991 container attach f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:23:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:09.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:10 np0005592157 focused_swirles[348019]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:23:10 np0005592157 focused_swirles[348019]: --> relative data size: 1.0
Jan 22 10:23:10 np0005592157 focused_swirles[348019]: --> All data devices are unavailable
Jan 22 10:23:10 np0005592157 systemd[1]: libpod-f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4.scope: Deactivated successfully.
Jan 22 10:23:10 np0005592157 podman[348003]: 2026-01-22 15:23:10.31274558 +0000 UTC m=+0.987649210 container died f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:23:10 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b2e80748757a6b83512d5efecb92e55ff4f19b3ba94813c5d958a0171b95ddf5-merged.mount: Deactivated successfully.
Jan 22 10:23:10 np0005592157 podman[348003]: 2026-01-22 15:23:10.373610678 +0000 UTC m=+1.048514298 container remove f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:23:10 np0005592157 systemd[1]: libpod-conmon-f322fe9d115019ff9b6bd6f07fea8b40dbb7443222a55875b615f6b4566a67b4.scope: Deactivated successfully.
Jan 22 10:23:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:11.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:11 np0005592157 podman[348187]: 2026-01-22 15:23:11.028236662 +0000 UTC m=+0.042927593 container create 9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:23:11 np0005592157 systemd[1]: Started libpod-conmon-9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2.scope.
Jan 22 10:23:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:23:11 np0005592157 podman[348187]: 2026-01-22 15:23:11.008687547 +0000 UTC m=+0.023378468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:23:11 np0005592157 podman[348187]: 2026-01-22 15:23:11.114978428 +0000 UTC m=+0.129669339 container init 9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 10:23:11 np0005592157 podman[348187]: 2026-01-22 15:23:11.121003524 +0000 UTC m=+0.135694415 container start 9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:23:11 np0005592157 hungry_lichterman[348203]: 167 167
Jan 22 10:23:11 np0005592157 systemd[1]: libpod-9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2.scope: Deactivated successfully.
Jan 22 10:23:11 np0005592157 podman[348187]: 2026-01-22 15:23:11.130445023 +0000 UTC m=+0.145135974 container attach 9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:23:11 np0005592157 podman[348187]: 2026-01-22 15:23:11.131644022 +0000 UTC m=+0.146334923 container died 9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:23:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d7e566f7bd000bd68692e20abaf93cd99b214dce140b68e2be637006cce6aab8-merged.mount: Deactivated successfully.
Jan 22 10:23:11 np0005592157 podman[348187]: 2026-01-22 15:23:11.167012711 +0000 UTC m=+0.181703602 container remove 9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 10:23:11 np0005592157 systemd[1]: libpod-conmon-9382a5573eaed0853146373b608e5d5805de3c4aab45bfeda24ec622e7ef4bb2.scope: Deactivated successfully.
Jan 22 10:23:11 np0005592157 podman[348227]: 2026-01-22 15:23:11.320237161 +0000 UTC m=+0.040524495 container create ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:23:11 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592157 systemd[1]: Started libpod-conmon-ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2.scope.
Jan 22 10:23:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:23:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726876964bc8cf7515fda2cda7541621693562bc1907c1f8e26d50bee556464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726876964bc8cf7515fda2cda7541621693562bc1907c1f8e26d50bee556464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726876964bc8cf7515fda2cda7541621693562bc1907c1f8e26d50bee556464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726876964bc8cf7515fda2cda7541621693562bc1907c1f8e26d50bee556464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:11 np0005592157 podman[348227]: 2026-01-22 15:23:11.301020995 +0000 UTC m=+0.021308359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:23:11 np0005592157 podman[348227]: 2026-01-22 15:23:11.414136261 +0000 UTC m=+0.134423595 container init ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:23:11 np0005592157 podman[348227]: 2026-01-22 15:23:11.419878101 +0000 UTC m=+0.140165435 container start ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_keller, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:23:11 np0005592157 podman[348227]: 2026-01-22 15:23:11.423585121 +0000 UTC m=+0.143872485 container attach ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_keller, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:23:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:23:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:23:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:12 np0005592157 frosty_keller[348243]: {
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:    "0": [
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:        {
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "devices": [
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "/dev/loop3"
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            ],
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "lv_name": "ceph_lv0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "lv_size": "7511998464",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "name": "ceph_lv0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "tags": {
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.cluster_name": "ceph",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.crush_device_class": "",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.encrypted": "0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.osd_id": "0",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.type": "block",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:                "ceph.vdo": "0"
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            },
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "type": "block",
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:            "vg_name": "ceph_vg0"
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:        }
Jan 22 10:23:12 np0005592157 frosty_keller[348243]:    ]
Jan 22 10:23:12 np0005592157 frosty_keller[348243]: }
Jan 22 10:23:12 np0005592157 systemd[1]: libpod-ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2.scope: Deactivated successfully.
Jan 22 10:23:12 np0005592157 podman[348227]: 2026-01-22 15:23:12.162691966 +0000 UTC m=+0.882979300 container died ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_keller, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:23:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d726876964bc8cf7515fda2cda7541621693562bc1907c1f8e26d50bee556464-merged.mount: Deactivated successfully.
Jan 22 10:23:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:12 np0005592157 podman[348227]: 2026-01-22 15:23:12.402023447 +0000 UTC m=+1.122310781 container remove ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:23:12 np0005592157 systemd[1]: libpod-conmon-ea81d5b9da11ae1391de2103ab5c29ceece0cd8f3bc90c957ba36fa1402104b2.scope: Deactivated successfully.
Jan 22 10:23:12 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:13.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:13 np0005592157 podman[348406]: 2026-01-22 15:23:13.037183837 +0000 UTC m=+0.072283975 container create 56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:23:13 np0005592157 systemd[1]: Started libpod-conmon-56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a.scope.
Jan 22 10:23:13 np0005592157 podman[348406]: 2026-01-22 15:23:12.992645917 +0000 UTC m=+0.027746125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:23:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:23:13 np0005592157 podman[348406]: 2026-01-22 15:23:13.146118302 +0000 UTC m=+0.181218420 container init 56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:23:13 np0005592157 podman[348406]: 2026-01-22 15:23:13.154605808 +0000 UTC m=+0.189705916 container start 56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:23:13 np0005592157 wonderful_merkle[348422]: 167 167
Jan 22 10:23:13 np0005592157 systemd[1]: libpod-56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a.scope: Deactivated successfully.
Jan 22 10:23:13 np0005592157 podman[348406]: 2026-01-22 15:23:13.160223594 +0000 UTC m=+0.195323712 container attach 56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 10:23:13 np0005592157 podman[348406]: 2026-01-22 15:23:13.160546472 +0000 UTC m=+0.195646570 container died 56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:23:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-89c4633995331cf999598d07278ef2cc146f08ad6f1c271c5ae30662ed9dc04e-merged.mount: Deactivated successfully.
Jan 22 10:23:13 np0005592157 podman[348406]: 2026-01-22 15:23:13.274248513 +0000 UTC m=+0.309348611 container remove 56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 10:23:13 np0005592157 systemd[1]: libpod-conmon-56908a72191fe49cd2d40e0d3942d979720baaaaf0e0883c1abd2df2bae10f1a.scope: Deactivated successfully.
Jan 22 10:23:13 np0005592157 podman[348448]: 2026-01-22 15:23:13.429182474 +0000 UTC m=+0.028088273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:23:13 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:13 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:13 np0005592157 podman[348448]: 2026-01-22 15:23:13.648784386 +0000 UTC m=+0.247690165 container create 76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:23:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:13 np0005592157 systemd[1]: Started libpod-conmon-76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a.scope.
Jan 22 10:23:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:23:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7063dfbd269fbe702ce9bb271d9fd6d57794271421ca4f39bfcfca2e5db58149/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7063dfbd269fbe702ce9bb271d9fd6d57794271421ca4f39bfcfca2e5db58149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7063dfbd269fbe702ce9bb271d9fd6d57794271421ca4f39bfcfca2e5db58149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7063dfbd269fbe702ce9bb271d9fd6d57794271421ca4f39bfcfca2e5db58149/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:23:13 np0005592157 podman[348448]: 2026-01-22 15:23:13.980031158 +0000 UTC m=+0.578936957 container init 76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:23:13 np0005592157 podman[348448]: 2026-01-22 15:23:13.985950572 +0000 UTC m=+0.584856351 container start 76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:23:14 np0005592157 podman[348448]: 2026-01-22 15:23:14.02664664 +0000 UTC m=+0.625552439 container attach 76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:23:14 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:14 np0005592157 bold_herschel[348466]: {
Jan 22 10:23:14 np0005592157 bold_herschel[348466]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:23:14 np0005592157 bold_herschel[348466]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:23:14 np0005592157 bold_herschel[348466]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:23:14 np0005592157 bold_herschel[348466]:        "osd_id": 0,
Jan 22 10:23:14 np0005592157 bold_herschel[348466]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:23:14 np0005592157 bold_herschel[348466]:        "type": "bluestore"
Jan 22 10:23:14 np0005592157 bold_herschel[348466]:    }
Jan 22 10:23:14 np0005592157 bold_herschel[348466]: }
Jan 22 10:23:14 np0005592157 systemd[1]: libpod-76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a.scope: Deactivated successfully.
Jan 22 10:23:14 np0005592157 podman[348448]: 2026-01-22 15:23:14.830116508 +0000 UTC m=+1.429022307 container died 76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:23:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7063dfbd269fbe702ce9bb271d9fd6d57794271421ca4f39bfcfca2e5db58149-merged.mount: Deactivated successfully.
Jan 22 10:23:14 np0005592157 podman[348448]: 2026-01-22 15:23:14.886404845 +0000 UTC m=+1.485310634 container remove 76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:23:14 np0005592157 systemd[1]: libpod-conmon-76bcfb9299668664a79c5fe6aedc2c5fe81a837883de297c8ba23a2d61a8020a.scope: Deactivated successfully.
Jan 22 10:23:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:23:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:23:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:14 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 535dfc9c-bccd-45c3-b0ee-e569860b4938 does not exist
Jan 22 10:23:14 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 93032078-d7d1-4aa5-abea-f5d66cd41865 does not exist
Jan 22 10:23:14 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cf0e3992-df9d-482a-a1d5-5c1b3439d9c9 does not exist
Jan 22 10:23:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:15.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:16 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:23:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:23:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:17.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:17.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #222. Immutable memtables: 0.
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.727757) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 222
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397727903, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 487, "num_deletes": 278, "total_data_size": 352263, "memory_usage": 360728, "flush_reason": "Manual Compaction"}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #223: started
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397732006, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 223, "file_size": 315797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98760, "largest_seqno": 99246, "table_properties": {"data_size": 313176, "index_size": 592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8080, "raw_average_key_size": 21, "raw_value_size": 307427, "raw_average_value_size": 819, "num_data_blocks": 25, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 278, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095381, "oldest_key_time": 1769095381, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 223, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 4315 microseconds, and 1894 cpu microseconds.
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.732086) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #223: 315797 bytes OK
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.732104) [db/memtable_list.cc:519] [default] Level-0 commit table #223 started
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.733611) [db/memtable_list.cc:722] [default] Level-0 commit table #223: memtable #1 done
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.733624) EVENT_LOG_v1 {"time_micros": 1769095397733619, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.733645) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 349260, prev total WAL file size 349260, number of live WAL files 2.
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000219.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.734189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303037' seq:72057594037927935, type:22 .. '6D6772737461740033323539' seq:0, type:0; will stop at (end)
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [223(308KB)], [221(12MB)]
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397734448, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [223], "files_L6": [221], "score": -1, "input_data_size": 13925292, "oldest_snapshot_seqno": -1}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #224: 14206 keys, 10036311 bytes, temperature: kUnknown
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397815243, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 224, "file_size": 10036311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9961593, "index_size": 38125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 390713, "raw_average_key_size": 27, "raw_value_size": 9722694, "raw_average_value_size": 684, "num_data_blocks": 1369, "num_entries": 14206, "num_filter_entries": 14206, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.816276) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 10036311 bytes
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.818021) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.1 rd, 123.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(75.9) write-amplify(31.8) OK, records in: 14772, records dropped: 566 output_compression: NoCompression
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.818068) EVENT_LOG_v1 {"time_micros": 1769095397818047, "job": 140, "event": "compaction_finished", "compaction_time_micros": 81391, "compaction_time_cpu_micros": 29890, "output_level": 6, "num_output_files": 1, "total_output_size": 10036311, "num_input_records": 14772, "num_output_records": 14206, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000223.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397818387, "job": 140, "event": "table_file_deletion", "file_number": 223}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397823861, "job": 140, "event": "table_file_deletion", "file_number": 221}
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.734074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.824039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.824049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.824053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.824056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:17.824059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:18 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:18 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:19 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:19.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:20 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:21.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:21 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:22 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:23.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:23 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:23 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:23.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:24 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:25.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:25 np0005592157 podman[348553]: 2026-01-22 15:23:25.33505549 +0000 UTC m=+0.063287788 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 10:23:25 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:25.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:26 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:27.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:27.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:27 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:29.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:29 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:29.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:30 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:30 np0005592157 podman[348627]: 2026-01-22 15:23:30.364264125 +0000 UTC m=+0.083473898 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:23:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:31.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:31 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:31.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:32 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:32 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:33.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:33 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:33 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:35.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:35 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:36 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:37.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:37 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:37.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:38 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:38 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:23:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:39.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:23:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:40 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:40 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:41.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:41.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:41 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:43.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:43 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:43 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:44 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:44 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:45.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:45.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:45 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:23:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:23:46 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:23:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:47.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:23:47
Jan 22 10:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 22 10:23:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:23:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:23:47.660 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:23:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:23:47.661 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:23:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:23:47.661 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:23:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:47.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:47 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:47 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:48 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:23:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:23:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:50 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:51 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:51.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:52 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:53 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:53 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:54 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:55.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:55 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:55.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:56 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:56 np0005592157 podman[348716]: 2026-01-22 15:23:56.313636685 +0000 UTC m=+0.048242922 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:23:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:57.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #225. Immutable memtables: 0.
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.773958) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 225
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437774091, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 760, "num_deletes": 325, "total_data_size": 730232, "memory_usage": 743760, "flush_reason": "Manual Compaction"}
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #226: started
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437875168, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 226, "file_size": 717997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 99247, "largest_seqno": 100006, "table_properties": {"data_size": 714385, "index_size": 1199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10658, "raw_average_key_size": 20, "raw_value_size": 706207, "raw_average_value_size": 1368, "num_data_blocks": 53, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 325, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095398, "oldest_key_time": 1769095398, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 226, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 101304 microseconds, and 3813 cpu microseconds.
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.875265) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #226: 717997 bytes OK
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.875284) [db/memtable_list.cc:519] [default] Level-0 commit table #226 started
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.886404) [db/memtable_list.cc:722] [default] Level-0 commit table #226: memtable #1 done
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.886431) EVENT_LOG_v1 {"time_micros": 1769095437886423, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.886455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 726027, prev total WAL file size 726027, number of live WAL files 2.
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000222.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.887216) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035303334' seq:72057594037927935, type:22 .. '6C6F676D0035323837' seq:0, type:0; will stop at (end)
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [226(701KB)], [224(9801KB)]
Jan 22 10:23:57 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437887330, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [226], "files_L6": [224], "score": -1, "input_data_size": 10754308, "oldest_snapshot_seqno": -1}
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #227: 14063 keys, 10583965 bytes, temperature: kUnknown
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095438092327, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 227, "file_size": 10583965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10509231, "index_size": 38461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35205, "raw_key_size": 388398, "raw_average_key_size": 27, "raw_value_size": 10271906, "raw_average_value_size": 730, "num_data_blocks": 1380, "num_entries": 14063, "num_filter_entries": 14063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.092681) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 10583965 bytes
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.097614) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.4 rd, 51.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.6 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(29.7) write-amplify(14.7) OK, records in: 14722, records dropped: 659 output_compression: NoCompression
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.097637) EVENT_LOG_v1 {"time_micros": 1769095438097627, "job": 142, "event": "compaction_finished", "compaction_time_micros": 205111, "compaction_time_cpu_micros": 27845, "output_level": 6, "num_output_files": 1, "total_output_size": 10583965, "num_input_records": 14722, "num_output_records": 14063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000226.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095438098148, "job": 142, "event": "table_file_deletion", "file_number": 226}
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095438099984, "job": 142, "event": "table_file_deletion", "file_number": 224}
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:57.887114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.100054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.100058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.100059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.100061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:23:58.100062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:58 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:59.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:23:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:59.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:23:59 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:01.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:01 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:01 np0005592157 podman[348739]: 2026-01-22 15:24:01.383066488 +0000 UTC m=+0.118395345 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:24:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:01.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:02 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:02 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:03.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:04 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:04 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:05 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002778468441745766 of space, bias 1.0, pg target 0.8224266587567467 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.20540491810906386 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 22 10:24:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:05.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:06 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:07 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:08 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:08 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:09.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:09 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 10:24:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:09.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:10 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:11 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 22 10:24:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:11.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 156 slow ops, oldest one blocked for 6442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:12 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 855 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 586 KiB/s wr, 19 op/s
Jan 22 10:24:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:13 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592157 ceph-mon[74359]: Health check update: 156 slow ops, oldest one blocked for 6442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:15 np0005592157 ceph-mon[74359]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 10:24:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:15.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4988b822-ad36-4594-a49b-6ece67c93b98 does not exist
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fb34d083-bcb9-4d71-b7f5-2949234f973a does not exist
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cd6f4b42-6899-419d-84ca-ae29f4f566fe does not exist
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:24:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:24:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:24:16 np0005592157 podman[349094]: 2026-01-22 15:24:16.891415347 +0000 UTC m=+0.051445170 container create ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 10:24:16 np0005592157 systemd[1]: Started libpod-conmon-ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb.scope.
Jan 22 10:24:16 np0005592157 podman[349094]: 2026-01-22 15:24:16.865222411 +0000 UTC m=+0.025252264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:24:16 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:24:16 np0005592157 podman[349094]: 2026-01-22 15:24:16.989762455 +0000 UTC m=+0.149792358 container init ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 22 10:24:16 np0005592157 podman[349094]: 2026-01-22 15:24:16.998722583 +0000 UTC m=+0.158752406 container start ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:24:17 np0005592157 lucid_curran[349110]: 167 167
Jan 22 10:24:17 np0005592157 systemd[1]: libpod-ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb.scope: Deactivated successfully.
Jan 22 10:24:17 np0005592157 podman[349094]: 2026-01-22 15:24:17.012133868 +0000 UTC m=+0.172163721 container attach ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:24:17 np0005592157 podman[349094]: 2026-01-22 15:24:17.012870716 +0000 UTC m=+0.172900579 container died ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 10:24:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f79c9fa57fc31266a95065dc2759d7fbd452cac9355b61a79f5f1dc9058f7e6c-merged.mount: Deactivated successfully.
Jan 22 10:24:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:17 np0005592157 podman[349094]: 2026-01-22 15:24:17.140993747 +0000 UTC m=+0.301023570 container remove ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_curran, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:24:17 np0005592157 systemd[1]: libpod-conmon-ba14eb8fc0f4f2efee44a26705b6291ad32ee9794e40b7edbd97136e47e853eb.scope: Deactivated successfully.
Jan 22 10:24:17 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:24:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:24:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:17 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:24:17 np0005592157 podman[349136]: 2026-01-22 15:24:17.334876344 +0000 UTC m=+0.048749884 container create c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 10:24:17 np0005592157 systemd[1]: Started libpod-conmon-c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5.scope.
Jan 22 10:24:17 np0005592157 podman[349136]: 2026-01-22 15:24:17.315223297 +0000 UTC m=+0.029096857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:24:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:24:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33be69d519dad305baec1f72b4f6c55fc3202089f80f0f79ca5f0b0212bb9991/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33be69d519dad305baec1f72b4f6c55fc3202089f80f0f79ca5f0b0212bb9991/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33be69d519dad305baec1f72b4f6c55fc3202089f80f0f79ca5f0b0212bb9991/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33be69d519dad305baec1f72b4f6c55fc3202089f80f0f79ca5f0b0212bb9991/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33be69d519dad305baec1f72b4f6c55fc3202089f80f0f79ca5f0b0212bb9991/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:17 np0005592157 podman[349136]: 2026-01-22 15:24:17.459900149 +0000 UTC m=+0.173773769 container init c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:24:17 np0005592157 podman[349136]: 2026-01-22 15:24:17.46819542 +0000 UTC m=+0.182068980 container start c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:24:17 np0005592157 podman[349136]: 2026-01-22 15:24:17.483710587 +0000 UTC m=+0.197584187 container attach c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 10:24:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 10:24:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:17.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 71 slow ops, oldest one blocked for 6448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:18 np0005592157 xenodochial_mestorf[349153]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:24:18 np0005592157 xenodochial_mestorf[349153]: --> relative data size: 1.0
Jan 22 10:24:18 np0005592157 xenodochial_mestorf[349153]: --> All data devices are unavailable
Jan 22 10:24:18 np0005592157 systemd[1]: libpod-c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5.scope: Deactivated successfully.
Jan 22 10:24:18 np0005592157 podman[349136]: 2026-01-22 15:24:18.275762797 +0000 UTC m=+0.989636357 container died c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:24:18 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:18 np0005592157 ceph-mon[74359]: Health check update: 71 slow ops, oldest one blocked for 6448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-33be69d519dad305baec1f72b4f6c55fc3202089f80f0f79ca5f0b0212bb9991-merged.mount: Deactivated successfully.
Jan 22 10:24:18 np0005592157 podman[349136]: 2026-01-22 15:24:18.555596292 +0000 UTC m=+1.269469872 container remove c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:24:18 np0005592157 systemd[1]: libpod-conmon-c2bf94f649b698bf3d7bc431049681e9203366478addc33ab096d45fa47c1ad5.scope: Deactivated successfully.
Jan 22 10:24:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:19 np0005592157 podman[349323]: 2026-01-22 15:24:19.220522916 +0000 UTC m=+0.045446085 container create 4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:24:19 np0005592157 systemd[1]: Started libpod-conmon-4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef.scope.
Jan 22 10:24:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:24:19 np0005592157 podman[349323]: 2026-01-22 15:24:19.199880664 +0000 UTC m=+0.024803853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:24:19 np0005592157 podman[349323]: 2026-01-22 15:24:19.414238529 +0000 UTC m=+0.239161708 container init 4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 10:24:19 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592157 podman[349323]: 2026-01-22 15:24:19.428971707 +0000 UTC m=+0.253894886 container start 4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:24:19 np0005592157 dazzling_kirch[349339]: 167 167
Jan 22 10:24:19 np0005592157 systemd[1]: libpod-4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef.scope: Deactivated successfully.
Jan 22 10:24:19 np0005592157 podman[349323]: 2026-01-22 15:24:19.557875786 +0000 UTC m=+0.382798955 container attach 4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:24:19 np0005592157 podman[349323]: 2026-01-22 15:24:19.558313137 +0000 UTC m=+0.383236306 container died 4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:24:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1a97376b7dbc4b3e6462fb10e87dc7f63551d6163e426bfe23e2b3ce53421c9d-merged.mount: Deactivated successfully.
Jan 22 10:24:19 np0005592157 podman[349323]: 2026-01-22 15:24:19.601849434 +0000 UTC m=+0.426772643 container remove 4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:24:19 np0005592157 systemd[1]: libpod-conmon-4c218027f34229d323d9478d83d9a8e2dc4e232e96d0032eb398f652010059ef.scope: Deactivated successfully.
Jan 22 10:24:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 10:24:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:19 np0005592157 podman[349363]: 2026-01-22 15:24:19.776482334 +0000 UTC m=+0.043226060 container create 06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:24:19 np0005592157 systemd[1]: Started libpod-conmon-06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea.scope.
Jan 22 10:24:19 np0005592157 podman[349363]: 2026-01-22 15:24:19.757694738 +0000 UTC m=+0.024438484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:24:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:24:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f962bdda7e9a09c89e57358b2732269342313e27fae2c5c945edb775c7c307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f962bdda7e9a09c89e57358b2732269342313e27fae2c5c945edb775c7c307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f962bdda7e9a09c89e57358b2732269342313e27fae2c5c945edb775c7c307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f962bdda7e9a09c89e57358b2732269342313e27fae2c5c945edb775c7c307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:19 np0005592157 podman[349363]: 2026-01-22 15:24:19.918454901 +0000 UTC m=+0.185198717 container init 06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:24:19 np0005592157 podman[349363]: 2026-01-22 15:24:19.927080661 +0000 UTC m=+0.193824397 container start 06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:24:19 np0005592157 podman[349363]: 2026-01-22 15:24:19.932221885 +0000 UTC m=+0.198965691 container attach 06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:24:20 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]: {
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:    "0": [
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:        {
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "devices": [
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "/dev/loop3"
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            ],
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "lv_name": "ceph_lv0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "lv_size": "7511998464",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "name": "ceph_lv0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "tags": {
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.cluster_name": "ceph",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.crush_device_class": "",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.encrypted": "0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.osd_id": "0",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.type": "block",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:                "ceph.vdo": "0"
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            },
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "type": "block",
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:            "vg_name": "ceph_vg0"
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:        }
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]:    ]
Jan 22 10:24:20 np0005592157 lucid_thompson[349380]: }
Jan 22 10:24:20 np0005592157 systemd[1]: libpod-06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea.scope: Deactivated successfully.
Jan 22 10:24:20 np0005592157 podman[349363]: 2026-01-22 15:24:20.727699369 +0000 UTC m=+0.994443115 container died 06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:24:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-10f962bdda7e9a09c89e57358b2732269342313e27fae2c5c945edb775c7c307-merged.mount: Deactivated successfully.
Jan 22 10:24:20 np0005592157 podman[349363]: 2026-01-22 15:24:20.782431148 +0000 UTC m=+1.049174864 container remove 06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:24:20 np0005592157 systemd[1]: libpod-conmon-06d270d5361ca0d85ef020f9f4ee8f7e3fa8545e35f7ce09e7374474f0bc11ea.scope: Deactivated successfully.
Jan 22 10:24:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:21 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:21 np0005592157 podman[349544]: 2026-01-22 15:24:21.467972191 +0000 UTC m=+0.044344507 container create 6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:24:21 np0005592157 systemd[1]: Started libpod-conmon-6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c.scope.
Jan 22 10:24:21 np0005592157 podman[349544]: 2026-01-22 15:24:21.446240104 +0000 UTC m=+0.022612440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:24:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:24:21 np0005592157 podman[349544]: 2026-01-22 15:24:21.56427716 +0000 UTC m=+0.140649496 container init 6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:24:21 np0005592157 podman[349544]: 2026-01-22 15:24:21.57295651 +0000 UTC m=+0.149328816 container start 6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 22 10:24:21 np0005592157 podman[349544]: 2026-01-22 15:24:21.577150712 +0000 UTC m=+0.153523058 container attach 6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 10:24:21 np0005592157 agitated_northcutt[349561]: 167 167
Jan 22 10:24:21 np0005592157 systemd[1]: libpod-6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c.scope: Deactivated successfully.
Jan 22 10:24:21 np0005592157 podman[349544]: 2026-01-22 15:24:21.579281774 +0000 UTC m=+0.155654090 container died 6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:24:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1d62005cb5b45b0955b8aa5ea7929822d32317314fb3c4dcc5cb07dbc496fb34-merged.mount: Deactivated successfully.
Jan 22 10:24:21 np0005592157 podman[349544]: 2026-01-22 15:24:21.628043878 +0000 UTC m=+0.204416194 container remove 6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:24:21 np0005592157 systemd[1]: libpod-conmon-6a033a054c6bec3a6c470a9a5eb63a7fc6b71b8a1395afe77ed436186cce984c.scope: Deactivated successfully.
Jan 22 10:24:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 10:24:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:21.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:21 np0005592157 podman[349584]: 2026-01-22 15:24:21.801707434 +0000 UTC m=+0.043276391 container create 8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 10:24:21 np0005592157 systemd[1]: Started libpod-conmon-8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668.scope.
Jan 22 10:24:21 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:24:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2509a66c845b17260fa56542c302312681b77f6439c40374c72a70701f8ac07e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2509a66c845b17260fa56542c302312681b77f6439c40374c72a70701f8ac07e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2509a66c845b17260fa56542c302312681b77f6439c40374c72a70701f8ac07e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:21 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2509a66c845b17260fa56542c302312681b77f6439c40374c72a70701f8ac07e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:24:21 np0005592157 podman[349584]: 2026-01-22 15:24:21.785577213 +0000 UTC m=+0.027146200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:24:21 np0005592157 podman[349584]: 2026-01-22 15:24:21.885697624 +0000 UTC m=+0.127266611 container init 8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:24:21 np0005592157 podman[349584]: 2026-01-22 15:24:21.892240343 +0000 UTC m=+0.133809310 container start 8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_panini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:24:21 np0005592157 podman[349584]: 2026-01-22 15:24:21.895681026 +0000 UTC m=+0.137249993 container attach 8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:24:22 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:22 np0005592157 cool_panini[349602]: {
Jan 22 10:24:22 np0005592157 cool_panini[349602]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:24:22 np0005592157 cool_panini[349602]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:24:22 np0005592157 cool_panini[349602]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:24:22 np0005592157 cool_panini[349602]:        "osd_id": 0,
Jan 22 10:24:22 np0005592157 cool_panini[349602]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:24:22 np0005592157 cool_panini[349602]:        "type": "bluestore"
Jan 22 10:24:22 np0005592157 cool_panini[349602]:    }
Jan 22 10:24:22 np0005592157 cool_panini[349602]: }
Jan 22 10:24:22 np0005592157 systemd[1]: libpod-8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668.scope: Deactivated successfully.
Jan 22 10:24:22 np0005592157 podman[349584]: 2026-01-22 15:24:22.767625756 +0000 UTC m=+1.009194723 container died 8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:24:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 71 slow ops, oldest one blocked for 6453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:22 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2509a66c845b17260fa56542c302312681b77f6439c40374c72a70701f8ac07e-merged.mount: Deactivated successfully.
Jan 22 10:24:22 np0005592157 podman[349584]: 2026-01-22 15:24:22.822828987 +0000 UTC m=+1.064397954 container remove 8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:24:22 np0005592157 systemd[1]: libpod-conmon-8d077f5dd92ee543082501d603241e30d43eff6dfd29d1e4cde6f613a9ae3668.scope: Deactivated successfully.
Jan 22 10:24:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:24:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:24:22 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e042b12d-319c-4785-8d69-452786cdef5a does not exist
Jan 22 10:24:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev eb6acaa7-fa72-4478-af83-385357b9b641 does not exist
Jan 22 10:24:22 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a6eae9c7-11e0-43e7-a3f0-503006caed4b does not exist
Jan 22 10:24:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:23 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:23 np0005592157 ceph-mon[74359]: Health check update: 71 slow ops, oldest one blocked for 6453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 MiB/s wr, 29 op/s
Jan 22 10:24:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:24 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:25.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:25 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 845 KiB/s wr, 17 op/s
Jan 22 10:24:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:26 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:24:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:27.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:24:27 np0005592157 podman[349686]: 2026-01-22 15:24:27.324307919 +0000 UTC m=+0.059988198 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:24:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 71 slow ops, oldest one blocked for 6458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:27 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:27 np0005592157 ceph-mon[74359]: Health check update: 71 slow ops, oldest one blocked for 6458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:29 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:30 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:31.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:31 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:24:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 17K writes, 54K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 17K writes, 5909 syncs, 2.99 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 975 writes, 1485 keys, 975 commit groups, 1.0 writes per commit group, ingest: 0.48 MB, 0.00 MB/s#012Interval WAL: 975 writes, 437 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:24:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:32 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:32 np0005592157 podman[349760]: 2026-01-22 15:24:32.846711008 +0000 UTC m=+0.089990806 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:24:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:24:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:33.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:24:33 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:33 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:35 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:24:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:24:36 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 71 slow ops, oldest one blocked for 6467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:36 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:37.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:37 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:37 np0005592157 ceph-mon[74359]: Health check update: 71 slow ops, oldest one blocked for 6467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:38 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:38 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:39.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:39.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:39 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:41 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:41.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 71 slow ops, oldest one blocked for 6473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:42 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:43 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:43 np0005592157 ceph-mon[74359]: Health check update: 71 slow ops, oldest one blocked for 6473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:43.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:44 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 10:24:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:45.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:45 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:45.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:46 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:24:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:24:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:47 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:24:47
Jan 22 10:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 22 10:24:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:24:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:24:47.662 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:24:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:24:47.663 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:24:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:24:47.663 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:24:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 6478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:48 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:48 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 6478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:49 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:24:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:49.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:24:50 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:51.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:51 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 6483 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:53 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:53 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 6483 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:53.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:54 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:55.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:55 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:55 np0005592157 ceph-mon[74359]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:55.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:56 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:24:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:24:57 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 74 slow ops, oldest one blocked for 6488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:58 np0005592157 podman[349849]: 2026-01-22 15:24:58.311671513 +0000 UTC m=+0.056256867 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:24:58 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:58 np0005592157 ceph-mon[74359]: Health check update: 74 slow ops, oldest one blocked for 6488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:59 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:24:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:24:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:00 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:01.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:01 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 108 slow ops, oldest one blocked for 6493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:03 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:03 np0005592157 ceph-mon[74359]: Health check update: 108 slow ops, oldest one blocked for 6493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:03.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:03 np0005592157 podman[349874]: 2026-01-22 15:25:03.339641429 +0000 UTC m=+0.074607342 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:25:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:04 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:05 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:05.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:25:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:05.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:06 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:07.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:07 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:07.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 108 slow ops, oldest one blocked for 6498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:08 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:08 np0005592157 ceph-mon[74359]: Health check update: 108 slow ops, oldest one blocked for 6498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:09 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:10 np0005592157 ceph-mon[74359]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:11.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:12 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 108 slow ops, oldest one blocked for 6503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:13 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:13 np0005592157 ceph-mon[74359]: Health check update: 108 slow ops, oldest one blocked for 6503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:13.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:13.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:14 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:15 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:16 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:25:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:25:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:17 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:17.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:18 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:18 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:19 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:20 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:22 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:23 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:23 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 83c1bc03-1802-4389-aac4-afdf554681ae does not exist
Jan 22 10:25:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev df7fee71-1996-4f9e-813d-bd23d8df5c4c does not exist
Jan 22 10:25:24 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d91211ed-d07d-48dc-9222-472081fd5faf does not exist
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:24 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:25:24 np0005592157 podman[350239]: 2026-01-22 15:25:24.658171625 +0000 UTC m=+0.043922887 container create 134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:25:24 np0005592157 systemd[1]: Started libpod-conmon-134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335.scope.
Jan 22 10:25:24 np0005592157 podman[350239]: 2026-01-22 15:25:24.638909688 +0000 UTC m=+0.024660970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:25:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:25:24 np0005592157 podman[350239]: 2026-01-22 15:25:24.761461873 +0000 UTC m=+0.147213145 container init 134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 10:25:24 np0005592157 podman[350239]: 2026-01-22 15:25:24.770301078 +0000 UTC m=+0.156052340 container start 134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 10:25:24 np0005592157 podman[350239]: 2026-01-22 15:25:24.774230633 +0000 UTC m=+0.159981895 container attach 134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 10:25:24 np0005592157 cranky_payne[350255]: 167 167
Jan 22 10:25:24 np0005592157 systemd[1]: libpod-134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335.scope: Deactivated successfully.
Jan 22 10:25:24 np0005592157 conmon[350255]: conmon 134f14edf89b165cf254 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335.scope/container/memory.events
Jan 22 10:25:24 np0005592157 podman[350260]: 2026-01-22 15:25:24.819757469 +0000 UTC m=+0.026118676 container died 134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:25:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-35fa6eff1cdda243a5d9bc17afe5ca9bba0010170b2fee5e301efa82a7b3dd0a-merged.mount: Deactivated successfully.
Jan 22 10:25:24 np0005592157 podman[350260]: 2026-01-22 15:25:24.858285394 +0000 UTC m=+0.064646601 container remove 134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:25:24 np0005592157 systemd[1]: libpod-conmon-134f14edf89b165cf2548713da475a04b0e1f2307695604af515fe79032d8335.scope: Deactivated successfully.
Jan 22 10:25:25 np0005592157 podman[350282]: 2026-01-22 15:25:25.03440065 +0000 UTC m=+0.041954270 container create 8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bartik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 22 10:25:25 np0005592157 systemd[1]: Started libpod-conmon-8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7.scope.
Jan 22 10:25:25 np0005592157 podman[350282]: 2026-01-22 15:25:25.014787814 +0000 UTC m=+0.022341414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:25:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:25:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03743d24c3e4c07dab606c31f23f67ade10dc7157f86d40fa19da63e51aa2d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03743d24c3e4c07dab606c31f23f67ade10dc7157f86d40fa19da63e51aa2d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03743d24c3e4c07dab606c31f23f67ade10dc7157f86d40fa19da63e51aa2d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03743d24c3e4c07dab606c31f23f67ade10dc7157f86d40fa19da63e51aa2d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03743d24c3e4c07dab606c31f23f67ade10dc7157f86d40fa19da63e51aa2d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:25 np0005592157 podman[350282]: 2026-01-22 15:25:25.13157961 +0000 UTC m=+0.139133200 container init 8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bartik, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:25:25 np0005592157 podman[350282]: 2026-01-22 15:25:25.145632721 +0000 UTC m=+0.153186291 container start 8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:25:25 np0005592157 podman[350282]: 2026-01-22 15:25:25.149079324 +0000 UTC m=+0.156632904 container attach 8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 10:25:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:25.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:25 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:25 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:25 np0005592157 thirsty_bartik[350299]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:25:25 np0005592157 thirsty_bartik[350299]: --> relative data size: 1.0
Jan 22 10:25:25 np0005592157 thirsty_bartik[350299]: --> All data devices are unavailable
Jan 22 10:25:25 np0005592157 systemd[1]: libpod-8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7.scope: Deactivated successfully.
Jan 22 10:25:26 np0005592157 podman[350315]: 2026-01-22 15:25:26.013844169 +0000 UTC m=+0.020548779 container died 8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bartik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:25:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e03743d24c3e4c07dab606c31f23f67ade10dc7157f86d40fa19da63e51aa2d8-merged.mount: Deactivated successfully.
Jan 22 10:25:26 np0005592157 podman[350315]: 2026-01-22 15:25:26.065115344 +0000 UTC m=+0.071819934 container remove 8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bartik, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:25:26 np0005592157 systemd[1]: libpod-conmon-8fa76eecc1f40749a20a3947717ca08f7541b58bc5948296df0fcc01800f66c7.scope: Deactivated successfully.
Jan 22 10:25:26 np0005592157 podman[350472]: 2026-01-22 15:25:26.702486579 +0000 UTC m=+0.039703655 container create 9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:25:26 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:26 np0005592157 systemd[1]: Started libpod-conmon-9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c.scope.
Jan 22 10:25:26 np0005592157 podman[350472]: 2026-01-22 15:25:26.684294008 +0000 UTC m=+0.021511064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:25:26 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:25:26 np0005592157 podman[350472]: 2026-01-22 15:25:26.800159151 +0000 UTC m=+0.137376207 container init 9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:25:26 np0005592157 podman[350472]: 2026-01-22 15:25:26.810387169 +0000 UTC m=+0.147604205 container start 9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:25:26 np0005592157 podman[350472]: 2026-01-22 15:25:26.81411734 +0000 UTC m=+0.151334396 container attach 9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:25:26 np0005592157 happy_bell[350488]: 167 167
Jan 22 10:25:26 np0005592157 systemd[1]: libpod-9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c.scope: Deactivated successfully.
Jan 22 10:25:26 np0005592157 podman[350472]: 2026-01-22 15:25:26.817385369 +0000 UTC m=+0.154602405 container died 9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 22 10:25:26 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8e1f6644186889c82077eb01693b3bff762cd014c27685e3ce4aa38c2a42e4b5-merged.mount: Deactivated successfully.
Jan 22 10:25:26 np0005592157 podman[350472]: 2026-01-22 15:25:26.854759136 +0000 UTC m=+0.191976172 container remove 9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:25:26 np0005592157 systemd[1]: libpod-conmon-9bbf6dcf5db7bbde0c9d3b976b56b41e343e291081250e0aee700929cbcb2c3c.scope: Deactivated successfully.
Jan 22 10:25:27 np0005592157 podman[350511]: 2026-01-22 15:25:27.008334505 +0000 UTC m=+0.045796493 container create 765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jemison, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:25:27 np0005592157 systemd[1]: Started libpod-conmon-765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1.scope.
Jan 22 10:25:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:25:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728150b868261a789eb1c29ae91f36d5b0258989f7ae5e4ed202b081311a924f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728150b868261a789eb1c29ae91f36d5b0258989f7ae5e4ed202b081311a924f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728150b868261a789eb1c29ae91f36d5b0258989f7ae5e4ed202b081311a924f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:27 np0005592157 podman[350511]: 2026-01-22 15:25:26.985107151 +0000 UTC m=+0.022569119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:25:27 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728150b868261a789eb1c29ae91f36d5b0258989f7ae5e4ed202b081311a924f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:27 np0005592157 podman[350511]: 2026-01-22 15:25:27.09871754 +0000 UTC m=+0.136179538 container init 765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jemison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:25:27 np0005592157 podman[350511]: 2026-01-22 15:25:27.105623707 +0000 UTC m=+0.143085655 container start 765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jemison, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:25:27 np0005592157 podman[350511]: 2026-01-22 15:25:27.110114706 +0000 UTC m=+0.147576674 container attach 765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:25:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:27 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:27.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:27 np0005592157 determined_jemison[350527]: {
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:    "0": [
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:        {
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "devices": [
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "/dev/loop3"
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            ],
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "lv_name": "ceph_lv0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "lv_size": "7511998464",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "name": "ceph_lv0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "tags": {
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.cluster_name": "ceph",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.crush_device_class": "",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.encrypted": "0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.osd_id": "0",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.type": "block",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:                "ceph.vdo": "0"
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            },
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "type": "block",
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:            "vg_name": "ceph_vg0"
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:        }
Jan 22 10:25:27 np0005592157 determined_jemison[350527]:    ]
Jan 22 10:25:27 np0005592157 determined_jemison[350527]: }
Jan 22 10:25:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:27 np0005592157 systemd[1]: libpod-765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1.scope: Deactivated successfully.
Jan 22 10:25:27 np0005592157 podman[350511]: 2026-01-22 15:25:27.943851429 +0000 UTC m=+0.981313397 container died 765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:25:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-728150b868261a789eb1c29ae91f36d5b0258989f7ae5e4ed202b081311a924f-merged.mount: Deactivated successfully.
Jan 22 10:25:27 np0005592157 podman[350511]: 2026-01-22 15:25:27.996339323 +0000 UTC m=+1.033801281 container remove 765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jemison, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 10:25:28 np0005592157 systemd[1]: libpod-conmon-765eea26a12759f1629a6d5850e5f049a4cce112329aede8405b509566928ed1.scope: Deactivated successfully.
Jan 22 10:25:28 np0005592157 podman[350691]: 2026-01-22 15:25:28.585917718 +0000 UTC m=+0.053653924 container create d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:25:28 np0005592157 systemd[1]: Started libpod-conmon-d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163.scope.
Jan 22 10:25:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:25:28 np0005592157 podman[350691]: 2026-01-22 15:25:28.567430059 +0000 UTC m=+0.035166285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:25:28 np0005592157 podman[350691]: 2026-01-22 15:25:28.67372916 +0000 UTC m=+0.141465386 container init d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:25:28 np0005592157 podman[350691]: 2026-01-22 15:25:28.680692189 +0000 UTC m=+0.148428375 container start d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:25:28 np0005592157 podman[350691]: 2026-01-22 15:25:28.684184944 +0000 UTC m=+0.151921140 container attach d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:25:28 np0005592157 romantic_merkle[350708]: 167 167
Jan 22 10:25:28 np0005592157 systemd[1]: libpod-d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163.scope: Deactivated successfully.
Jan 22 10:25:28 np0005592157 podman[350691]: 2026-01-22 15:25:28.686138901 +0000 UTC m=+0.153875097 container died d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 10:25:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b71cc1cfd527036bd0128a2ac65fd2c8b418e6375bb7e843962ddd0cb6f64098-merged.mount: Deactivated successfully.
Jan 22 10:25:28 np0005592157 podman[350705]: 2026-01-22 15:25:28.719208854 +0000 UTC m=+0.082524564 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:25:28 np0005592157 podman[350691]: 2026-01-22 15:25:28.72482202 +0000 UTC m=+0.192558216 container remove d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:25:28 np0005592157 systemd[1]: libpod-conmon-d987c11f9ca37e842b9121aaceeb1a61812004726c76df1ef280072c14375163.scope: Deactivated successfully.
Jan 22 10:25:28 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:28 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:28 np0005592157 podman[350747]: 2026-01-22 15:25:28.868716514 +0000 UTC m=+0.021443821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:25:29 np0005592157 podman[350747]: 2026-01-22 15:25:29.002869751 +0000 UTC m=+0.155597038 container create 3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_boyd, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:25:29 np0005592157 systemd[1]: Started libpod-conmon-3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81.scope.
Jan 22 10:25:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:25:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e338c6e4c2ea9edfe9b577b55d744e1048fb8f82a62c77d5a91517f28b61da4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e338c6e4c2ea9edfe9b577b55d744e1048fb8f82a62c77d5a91517f28b61da4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e338c6e4c2ea9edfe9b577b55d744e1048fb8f82a62c77d5a91517f28b61da4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e338c6e4c2ea9edfe9b577b55d744e1048fb8f82a62c77d5a91517f28b61da4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:25:29 np0005592157 podman[350747]: 2026-01-22 15:25:29.129900834 +0000 UTC m=+0.282628151 container init 3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:25:29 np0005592157 podman[350747]: 2026-01-22 15:25:29.135510791 +0000 UTC m=+0.288238078 container start 3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 10:25:29 np0005592157 podman[350747]: 2026-01-22 15:25:29.138917763 +0000 UTC m=+0.291645080 container attach 3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_boyd, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:25:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:29.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:29 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:29.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:29 np0005592157 confident_boyd[350763]: {
Jan 22 10:25:29 np0005592157 confident_boyd[350763]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:25:29 np0005592157 confident_boyd[350763]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:25:29 np0005592157 confident_boyd[350763]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:25:29 np0005592157 confident_boyd[350763]:        "osd_id": 0,
Jan 22 10:25:29 np0005592157 confident_boyd[350763]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:25:29 np0005592157 confident_boyd[350763]:        "type": "bluestore"
Jan 22 10:25:29 np0005592157 confident_boyd[350763]:    }
Jan 22 10:25:29 np0005592157 confident_boyd[350763]: }
Jan 22 10:25:29 np0005592157 systemd[1]: libpod-3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81.scope: Deactivated successfully.
Jan 22 10:25:29 np0005592157 podman[350747]: 2026-01-22 15:25:29.949296029 +0000 UTC m=+1.102023326 container died 3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_boyd, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:25:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e338c6e4c2ea9edfe9b577b55d744e1048fb8f82a62c77d5a91517f28b61da4b-merged.mount: Deactivated successfully.
Jan 22 10:25:30 np0005592157 podman[350747]: 2026-01-22 15:25:30.006535039 +0000 UTC m=+1.159262326 container remove 3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:25:30 np0005592157 systemd[1]: libpod-conmon-3271a7c4e8413f64aa7efafcfde66b045f399a194527752c11782f6830f1bc81.scope: Deactivated successfully.
Jan 22 10:25:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:25:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:25:30 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1f161485-263d-48de-b9ff-889e1af8f42e does not exist
Jan 22 10:25:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0427418e-11d0-4689-ab58-4edc81918c9b does not exist
Jan 22 10:25:30 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d00e0cb5-5c29-46f1-9ad8-7eacf17f0b2a does not exist
Jan 22 10:25:31 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:31 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:32 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:33 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:33 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:33.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:34 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:34 np0005592157 podman[350900]: 2026-01-22 15:25:34.370992863 +0000 UTC m=+0.096262668 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:25:35 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:35.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:36 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:37 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:37.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:38 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:38 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:39 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:25:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:39.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:25:40 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:41.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:42 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:42 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:43 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:43 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:43.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:43.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:44 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:45 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:45.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:45.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:25:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:25:47 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:47.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:25:47
Jan 22 10:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms']
Jan 22 10:25:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:25:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:25:47.664 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:25:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:25:47.664 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:25:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:25:47.664 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:25:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:48 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:48 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:48 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:25:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:49.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:25:49 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:49.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:50 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:50 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:51.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:51 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:51.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:52 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:53.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:53 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:53 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:53.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #228. Immutable memtables: 0.
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.061400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 228
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554061539, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 1766, "num_deletes": 449, "total_data_size": 2199640, "memory_usage": 2246128, "flush_reason": "Manual Compaction"}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #229: started
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554134029, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 229, "file_size": 2150401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100007, "largest_seqno": 101772, "table_properties": {"data_size": 2143190, "index_size": 3576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23099, "raw_average_key_size": 22, "raw_value_size": 2125694, "raw_average_value_size": 2088, "num_data_blocks": 155, "num_entries": 1018, "num_filter_entries": 1018, "num_deletions": 449, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095438, "oldest_key_time": 1769095438, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 229, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 72697 microseconds, and 6583 cpu microseconds.
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.134120) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #229: 2150401 bytes OK
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.134144) [db/memtable_list.cc:519] [default] Level-0 commit table #229 started
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.142969) [db/memtable_list.cc:722] [default] Level-0 commit table #229: memtable #1 done
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.142999) EVENT_LOG_v1 {"time_micros": 1769095554142992, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.143021) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 2191137, prev total WAL file size 2191137, number of live WAL files 2.
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000225.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.144109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [229(2100KB)], [227(10MB)]
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554144151, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [229], "files_L6": [227], "score": -1, "input_data_size": 12734366, "oldest_snapshot_seqno": -1}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #230: 14170 keys, 10835501 bytes, temperature: kUnknown
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554225438, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 230, "file_size": 10835501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10759896, "index_size": 39076, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390551, "raw_average_key_size": 27, "raw_value_size": 10520555, "raw_average_value_size": 742, "num_data_blocks": 1404, "num_entries": 14170, "num_filter_entries": 14170, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.225727) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 10835501 bytes
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.226885) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.5 rd, 133.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.1 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.0) write-amplify(5.0) OK, records in: 15081, records dropped: 911 output_compression: NoCompression
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.226901) EVENT_LOG_v1 {"time_micros": 1769095554226894, "job": 144, "event": "compaction_finished", "compaction_time_micros": 81366, "compaction_time_cpu_micros": 27470, "output_level": 6, "num_output_files": 1, "total_output_size": 10835501, "num_input_records": 15081, "num_output_records": 14170, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000229.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554227400, "job": 144, "event": "table_file_deletion", "file_number": 229}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554229232, "job": 144, "event": "table_file_deletion", "file_number": 227}
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.143899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.229307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.229311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.229313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.229315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:25:54.229317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:55 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:55.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:55.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:56 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:57 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:57.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:57.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:58 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:58 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:59 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:59.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:59 np0005592157 podman[350989]: 2026-01-22 15:25:59.344022366 +0000 UTC m=+0.068650248 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:25:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:25:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:25:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:59.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:00 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:26:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:01.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:26:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:01 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:01.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:02 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:03.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:03.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:04 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:04 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:26:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:05 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:05 np0005592157 podman[351012]: 2026-01-22 15:26:05.362957221 +0000 UTC m=+0.105408551 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:26:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:05.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:06 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:07.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:07 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:08 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:08 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:09.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:09 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:09.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:10 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:11 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:12 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:13 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:13 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:13.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:14 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:15 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:15.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:26:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:26:16 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:17 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:17.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:17 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:17 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:18 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:18 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:19.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:26:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:19.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:26:20 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:21 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:21.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:21.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:22 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:26:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:23.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:26:23 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:23.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:24 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:24 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:24 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:25.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:25 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:26 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:26:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:27.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:26:27 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:27.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:27 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:28 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:28 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:28 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:29.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:29.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:29 np0005592157 podman[351127]: 2026-01-22 15:26:29.90180233 +0000 UTC m=+0.074787915 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 10:26:30 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:31.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:31 np0005592157 podman[351343]: 2026-01-22 15:26:31.376894985 +0000 UTC m=+0.072194964 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:26:31 np0005592157 podman[351343]: 2026-01-22 15:26:31.467263439 +0000 UTC m=+0.162563388 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:26:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:31.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:31 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:32 np0005592157 podman[351495]: 2026-01-22 15:26:32.019355983 +0000 UTC m=+0.051804968 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:26:32 np0005592157 podman[351495]: 2026-01-22 15:26:32.026487487 +0000 UTC m=+0.058936472 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:26:32 np0005592157 podman[351560]: 2026-01-22 15:26:32.204294314 +0000 UTC m=+0.044706877 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2)
Jan 22 10:26:32 np0005592157 podman[351560]: 2026-01-22 15:26:32.216182572 +0000 UTC m=+0.056595105 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4)
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:26:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 62e85800-f8fb-40a5-b1ae-3e8c4b8b3a0f does not exist
Jan 22 10:26:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e9430dab-4fd0-42a0-8315-f3b42033b22a does not exist
Jan 22 10:26:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 54a4c67f-bf9c-408e-8696-89d082b16c31 does not exist
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:26:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:33 np0005592157 podman[351861]: 2026-01-22 15:26:33.487126479 +0000 UTC m=+0.032945981 container create 4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:26:33 np0005592157 systemd[1]: Started libpod-conmon-4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c.scope.
Jan 22 10:26:33 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:26:33 np0005592157 podman[351861]: 2026-01-22 15:26:33.549182986 +0000 UTC m=+0.095002508 container init 4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goodall, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:26:33 np0005592157 podman[351861]: 2026-01-22 15:26:33.555609272 +0000 UTC m=+0.101428774 container start 4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goodall, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 10:26:33 np0005592157 podman[351861]: 2026-01-22 15:26:33.558660116 +0000 UTC m=+0.104479638 container attach 4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goodall, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:26:33 np0005592157 suspicious_goodall[351876]: 167 167
Jan 22 10:26:33 np0005592157 systemd[1]: libpod-4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c.scope: Deactivated successfully.
Jan 22 10:26:33 np0005592157 podman[351861]: 2026-01-22 15:26:33.56090513 +0000 UTC m=+0.106724632 container died 4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goodall, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:26:33 np0005592157 podman[351861]: 2026-01-22 15:26:33.472596466 +0000 UTC m=+0.018415978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:26:33 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b557cdb9d6491428db8228b78026e1723bd149ea496d19d6b260f23a8c4bebc9-merged.mount: Deactivated successfully.
Jan 22 10:26:33 np0005592157 podman[351861]: 2026-01-22 15:26:33.597077669 +0000 UTC m=+0.142897171 container remove 4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:26:33 np0005592157 systemd[1]: libpod-conmon-4f5c3f4fa914801279b5de01fe5219dfd36f3e6c769dd2565caeb19375891b0c.scope: Deactivated successfully.
Jan 22 10:26:33 np0005592157 podman[351901]: 2026-01-22 15:26:33.744263572 +0000 UTC m=+0.041921529 container create 19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 10:26:33 np0005592157 systemd[1]: Started libpod-conmon-19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626.scope.
Jan 22 10:26:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:33 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:26:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edefc9f689d418f813a28a9e5816b73c83d00a7f1c8142183da150a16675c703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edefc9f689d418f813a28a9e5816b73c83d00a7f1c8142183da150a16675c703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edefc9f689d418f813a28a9e5816b73c83d00a7f1c8142183da150a16675c703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edefc9f689d418f813a28a9e5816b73c83d00a7f1c8142183da150a16675c703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edefc9f689d418f813a28a9e5816b73c83d00a7f1c8142183da150a16675c703/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:33 np0005592157 podman[351901]: 2026-01-22 15:26:33.724522153 +0000 UTC m=+0.022180130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:26:33 np0005592157 podman[351901]: 2026-01-22 15:26:33.824620143 +0000 UTC m=+0.122278160 container init 19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 10:26:33 np0005592157 podman[351901]: 2026-01-22 15:26:33.83067466 +0000 UTC m=+0.128332617 container start 19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:26:33 np0005592157 podman[351901]: 2026-01-22 15:26:33.833745545 +0000 UTC m=+0.131403502 container attach 19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 10:26:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:34 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:34 np0005592157 flamboyant_cartwright[351918]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:26:34 np0005592157 flamboyant_cartwright[351918]: --> relative data size: 1.0
Jan 22 10:26:34 np0005592157 flamboyant_cartwright[351918]: --> All data devices are unavailable
Jan 22 10:26:34 np0005592157 systemd[1]: libpod-19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626.scope: Deactivated successfully.
Jan 22 10:26:34 np0005592157 podman[351933]: 2026-01-22 15:26:34.679843958 +0000 UTC m=+0.024030375 container died 19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:26:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-edefc9f689d418f813a28a9e5816b73c83d00a7f1c8142183da150a16675c703-merged.mount: Deactivated successfully.
Jan 22 10:26:34 np0005592157 podman[351933]: 2026-01-22 15:26:34.758544108 +0000 UTC m=+0.102730515 container remove 19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 10:26:34 np0005592157 systemd[1]: libpod-conmon-19f8bbded4fa5ea70e8c70b7b6f37a09806f66b09fba09fd907ea0c2da7d4626.scope: Deactivated successfully.
Jan 22 10:26:35 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:35.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:35 np0005592157 podman[352089]: 2026-01-22 15:26:35.383166804 +0000 UTC m=+0.024522656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:26:35 np0005592157 podman[352089]: 2026-01-22 15:26:35.495487021 +0000 UTC m=+0.136842833 container create d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 10:26:35 np0005592157 systemd[1]: Started libpod-conmon-d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9.scope.
Jan 22 10:26:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:26:35 np0005592157 podman[352089]: 2026-01-22 15:26:35.69519106 +0000 UTC m=+0.336546882 container init d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:26:35 np0005592157 podman[352089]: 2026-01-22 15:26:35.705322376 +0000 UTC m=+0.346678178 container start d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sinoussi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:26:35 np0005592157 podman[352089]: 2026-01-22 15:26:35.70960872 +0000 UTC m=+0.350964552 container attach d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:26:35 np0005592157 boring_sinoussi[352120]: 167 167
Jan 22 10:26:35 np0005592157 systemd[1]: libpod-d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9.scope: Deactivated successfully.
Jan 22 10:26:35 np0005592157 podman[352089]: 2026-01-22 15:26:35.713424372 +0000 UTC m=+0.354780184 container died d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:26:35 np0005592157 systemd[1]: var-lib-containers-storage-overlay-96ac1564b9532ed20f76ce84a96dce6994043ee5bee5a0aa1b9af1d9cfefd32f-merged.mount: Deactivated successfully.
Jan 22 10:26:35 np0005592157 podman[352103]: 2026-01-22 15:26:35.751102747 +0000 UTC m=+0.208292828 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 10:26:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:35.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:36 np0005592157 podman[352089]: 2026-01-22 15:26:36.012264918 +0000 UTC m=+0.653620730 container remove d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sinoussi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:26:36 np0005592157 systemd[1]: libpod-conmon-d082c6c8eaaf59b56ca771ae9c60951778e228f2b4c8a8a48ed7b32b36ccd4f9.scope: Deactivated successfully.
Jan 22 10:26:36 np0005592157 podman[352158]: 2026-01-22 15:26:36.184167622 +0000 UTC m=+0.058876571 container create 209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:26:36 np0005592157 systemd[1]: Started libpod-conmon-209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c.scope.
Jan 22 10:26:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:26:36 np0005592157 podman[352158]: 2026-01-22 15:26:36.153490007 +0000 UTC m=+0.028199016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:26:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428b83f9ae128e4a5390809fb7ef829ba25c2771734933b2fd74554f706efd4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428b83f9ae128e4a5390809fb7ef829ba25c2771734933b2fd74554f706efd4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428b83f9ae128e4a5390809fb7ef829ba25c2771734933b2fd74554f706efd4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428b83f9ae128e4a5390809fb7ef829ba25c2771734933b2fd74554f706efd4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:36 np0005592157 podman[352158]: 2026-01-22 15:26:36.261345295 +0000 UTC m=+0.136054254 container init 209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 10:26:36 np0005592157 podman[352158]: 2026-01-22 15:26:36.266982822 +0000 UTC m=+0.141691741 container start 209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:26:36 np0005592157 podman[352158]: 2026-01-22 15:26:36.271499342 +0000 UTC m=+0.146208281 container attach 209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:26:36 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]: {
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:    "0": [
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:        {
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "devices": [
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "/dev/loop3"
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            ],
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "lv_name": "ceph_lv0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "lv_size": "7511998464",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "name": "ceph_lv0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "tags": {
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.cluster_name": "ceph",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.crush_device_class": "",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.encrypted": "0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.osd_id": "0",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.type": "block",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:                "ceph.vdo": "0"
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            },
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "type": "block",
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:            "vg_name": "ceph_vg0"
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:        }
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]:    ]
Jan 22 10:26:37 np0005592157 heuristic_wright[352175]: }
Jan 22 10:26:37 np0005592157 systemd[1]: libpod-209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c.scope: Deactivated successfully.
Jan 22 10:26:37 np0005592157 podman[352158]: 2026-01-22 15:26:37.035363817 +0000 UTC m=+0.910072736 container died 209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 10:26:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-428b83f9ae128e4a5390809fb7ef829ba25c2771734933b2fd74554f706efd4f-merged.mount: Deactivated successfully.
Jan 22 10:26:37 np0005592157 podman[352158]: 2026-01-22 15:26:37.088453256 +0000 UTC m=+0.963162175 container remove 209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:26:37 np0005592157 systemd[1]: libpod-conmon-209672e5b29476bf05135ce27c2eca7b030f46a8c5b63522ffdcc6cce9c1f03c.scope: Deactivated successfully.
Jan 22 10:26:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:37.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:37 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:37 np0005592157 podman[352333]: 2026-01-22 15:26:37.693197069 +0000 UTC m=+0.052451814 container create cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:26:37 np0005592157 systemd[1]: Started libpod-conmon-cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9.scope.
Jan 22 10:26:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:26:37 np0005592157 podman[352333]: 2026-01-22 15:26:37.665386824 +0000 UTC m=+0.024641669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:26:37 np0005592157 podman[352333]: 2026-01-22 15:26:37.77232911 +0000 UTC m=+0.131583885 container init cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:26:37 np0005592157 podman[352333]: 2026-01-22 15:26:37.779492804 +0000 UTC m=+0.138747559 container start cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brattain, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:26:37 np0005592157 podman[352333]: 2026-01-22 15:26:37.783158803 +0000 UTC m=+0.142413568 container attach cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:26:37 np0005592157 adoring_brattain[352349]: 167 167
Jan 22 10:26:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:37 np0005592157 systemd[1]: libpod-cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9.scope: Deactivated successfully.
Jan 22 10:26:37 np0005592157 podman[352333]: 2026-01-22 15:26:37.785802788 +0000 UTC m=+0.145057553 container died cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 10:26:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a2e9c00660c2090a0df19dce7d07057e4b900166a2ed17f293d7b6b653b744f3-merged.mount: Deactivated successfully.
Jan 22 10:26:37 np0005592157 podman[352333]: 2026-01-22 15:26:37.840575167 +0000 UTC m=+0.199829962 container remove cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 10:26:37 np0005592157 systemd[1]: libpod-conmon-cd1019dd8aa51ab99d207da9e32035b85a8069583c06ca00f9921de5e9b6f2e9.scope: Deactivated successfully.
Jan 22 10:26:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:37.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:37 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 164 slow ops, oldest one blocked for 6588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:38 np0005592157 podman[352375]: 2026-01-22 15:26:38.049532181 +0000 UTC m=+0.053304275 container create 2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:26:38 np0005592157 systemd[1]: Started libpod-conmon-2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22.scope.
Jan 22 10:26:38 np0005592157 podman[352375]: 2026-01-22 15:26:38.02644675 +0000 UTC m=+0.030218854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:26:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:26:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba0e6611bea453b9e344e8270aef8324f2035d1f9bb115fa538520621079c61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba0e6611bea453b9e344e8270aef8324f2035d1f9bb115fa538520621079c61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba0e6611bea453b9e344e8270aef8324f2035d1f9bb115fa538520621079c61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba0e6611bea453b9e344e8270aef8324f2035d1f9bb115fa538520621079c61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:26:38 np0005592157 podman[352375]: 2026-01-22 15:26:38.161044388 +0000 UTC m=+0.164816482 container init 2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:26:38 np0005592157 podman[352375]: 2026-01-22 15:26:38.171367929 +0000 UTC m=+0.175140033 container start 2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:26:38 np0005592157 podman[352375]: 2026-01-22 15:26:38.175318975 +0000 UTC m=+0.179091049 container attach 2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:26:38 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:38 np0005592157 ceph-mon[74359]: Health check update: 164 slow ops, oldest one blocked for 6588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:38 np0005592157 angry_swirles[352391]: {
Jan 22 10:26:38 np0005592157 angry_swirles[352391]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:26:38 np0005592157 angry_swirles[352391]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:26:38 np0005592157 angry_swirles[352391]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:26:38 np0005592157 angry_swirles[352391]:        "osd_id": 0,
Jan 22 10:26:38 np0005592157 angry_swirles[352391]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:26:38 np0005592157 angry_swirles[352391]:        "type": "bluestore"
Jan 22 10:26:38 np0005592157 angry_swirles[352391]:    }
Jan 22 10:26:38 np0005592157 angry_swirles[352391]: }
Jan 22 10:26:39 np0005592157 systemd[1]: libpod-2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22.scope: Deactivated successfully.
Jan 22 10:26:39 np0005592157 podman[352375]: 2026-01-22 15:26:39.014063429 +0000 UTC m=+1.017835533 container died 2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swirles, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:26:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-3ba0e6611bea453b9e344e8270aef8324f2035d1f9bb115fa538520621079c61-merged.mount: Deactivated successfully.
Jan 22 10:26:39 np0005592157 podman[352375]: 2026-01-22 15:26:39.065643481 +0000 UTC m=+1.069415565 container remove 2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:26:39 np0005592157 systemd[1]: libpod-conmon-2985b933f967e58c0cc3db05f3a6b6435dccb6633d869e22e6763996cf8efc22.scope: Deactivated successfully.
Jan 22 10:26:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:26:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:26:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 934ffd61-4233-4148-8d46-7a4fd39715b4 does not exist
Jan 22 10:26:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 72ebf939-ad51-41cb-86db-21a915ff3fda does not exist
Jan 22 10:26:39 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d7d6ed7f-01c2-48c2-b0bf-db66957e23e4 does not exist
Jan 22 10:26:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:39.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:39 np0005592157 ceph-mon[74359]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:39.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:40 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:40 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:41.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:41 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:42 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 6593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:43.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:43.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:43 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:43 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 6593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:44 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:45.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:45.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:45 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:26:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:26:46 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:47.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:26:47
Jan 22 10:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Jan 22 10:26:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:26:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:26:47.666 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:26:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:26:47.667 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:26:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:26:47.667 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:26:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:47.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:47 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 6598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:47 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 6598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:49 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:49.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:49.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:50 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:51 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:51.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:52 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:52 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 6603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:53 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:53 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 6603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:53.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:53.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:54 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:55 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:26:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:26:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:56 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:57.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:57.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:57 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 6608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:57 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:58 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:58 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:59 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 6608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:59 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:26:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:26:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:00 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:00 np0005592157 podman[352540]: 2026-01-22 15:27:00.349080926 +0000 UTC m=+0.075900633 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:27:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:01.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Jan 22 10:27:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:27:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:01.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:27:02 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 6613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:03 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:03 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 6613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Jan 22 10:27:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:04 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:27:05 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 10:27:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:05.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:06 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:06 np0005592157 podman[352566]: 2026-01-22 15:27:06.378848411 +0000 UTC m=+0.116733735 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 22 10:27:07 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:07.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 10:27:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:07.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:07 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 6618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:07 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:08 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:08 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 6618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:09 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:09.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 10:27:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:09.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:10 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:11.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:11 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 10:27:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:11.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:12 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:12 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 6623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:13.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:13 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:13 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 6623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Jan 22 10:27:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:14 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:15.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:15 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 112 op/s
Jan 22 10:27:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:15.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:27:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:27:16 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:17.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 10:27:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:17.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:17 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 6628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:18 np0005592157 ceph-mon[74359]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:18 np0005592157 ceph-mon[74359]: Health check update: 38 slow ops, oldest one blocked for 6628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:19.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 10:27:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:27:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:19.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:27:20 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:21.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:21 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 10:27:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:21.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:22 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 80 slow ops, oldest one blocked for 6633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:23.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:23 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:23 np0005592157 ceph-mon[74359]: Health check update: 80 slow ops, oldest one blocked for 6633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 10:27:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:27:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:23.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:27:24 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:25.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:25 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 10:27:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:25.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:27.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:27.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 80 slow ops, oldest one blocked for 6638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:28 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:28 np0005592157 ceph-mon[74359]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:28 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:28 np0005592157 ceph-mon[74359]: Health check update: 80 slow ops, oldest one blocked for 6638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:29.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:29 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:27:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:29.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:27:30 np0005592157 podman[352723]: 2026-01-22 15:27:30.475805176 +0000 UTC m=+0.057341933 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 10:27:30 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:31.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:32 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 78 slow ops, oldest one blocked for 6643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:33 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:33 np0005592157 ceph-mon[74359]: Health check update: 78 slow ops, oldest one blocked for 6643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:33.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:33.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:34 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:35 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:35.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:35.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:36 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:37 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:37 np0005592157 podman[352746]: 2026-01-22 15:27:37.368883456 +0000 UTC m=+0.100582583 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 10:27:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:37.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:37.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 78 slow ops, oldest one blocked for 6648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:38 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:38 np0005592157 ceph-mon[74359]: Health check update: 78 slow ops, oldest one blocked for 6648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:39 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:39.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:39.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:40 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:27:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:27:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:41.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:41 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:41.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:42 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:42 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:42 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 78 slow ops, oldest one blocked for 6653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5d0c9943-a10f-4ec0-b3d5-5d87abda7f32 does not exist
Jan 22 10:27:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a7cf2e7e-1907-447e-863d-105c232e543b does not exist
Jan 22 10:27:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9d309f84-e24e-4f23-91ef-f74ac5b90469 does not exist
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:27:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:43.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:43.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: Health check update: 78 slow ops, oldest one blocked for 6653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:27:44 np0005592157 podman[353048]: 2026-01-22 15:27:44.265835397 +0000 UTC m=+0.064859266 container create bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meninsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:27:44 np0005592157 systemd[1]: Started libpod-conmon-bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8.scope.
Jan 22 10:27:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:27:44 np0005592157 podman[353048]: 2026-01-22 15:27:44.240375979 +0000 UTC m=+0.039399898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:27:44 np0005592157 podman[353048]: 2026-01-22 15:27:44.351544288 +0000 UTC m=+0.150568187 container init bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:27:44 np0005592157 podman[353048]: 2026-01-22 15:27:44.362441973 +0000 UTC m=+0.161465842 container start bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 10:27:44 np0005592157 podman[353048]: 2026-01-22 15:27:44.368373857 +0000 UTC m=+0.167397806 container attach bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:27:44 np0005592157 systemd[1]: libpod-bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8.scope: Deactivated successfully.
Jan 22 10:27:44 np0005592157 confident_meninsky[353064]: 167 167
Jan 22 10:27:44 np0005592157 conmon[353064]: conmon bd12a426e6f2cf42ba74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8.scope/container/memory.events
Jan 22 10:27:44 np0005592157 podman[353048]: 2026-01-22 15:27:44.370427146 +0000 UTC m=+0.169451005 container died bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meninsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:27:44 np0005592157 systemd[1]: var-lib-containers-storage-overlay-498d31ccfe297037f7d2c025414eb3d7a3883b30ea94876f546b7a2224dc4803-merged.mount: Deactivated successfully.
Jan 22 10:27:44 np0005592157 podman[353048]: 2026-01-22 15:27:44.433480127 +0000 UTC m=+0.232504036 container remove bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:27:44 np0005592157 systemd[1]: libpod-conmon-bd12a426e6f2cf42ba74b3967808e6a1e12bd9dac2359a05def644ea7ef635b8.scope: Deactivated successfully.
Jan 22 10:27:44 np0005592157 podman[353088]: 2026-01-22 15:27:44.622579009 +0000 UTC m=+0.048575281 container create 7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 10:27:44 np0005592157 systemd[1]: Started libpod-conmon-7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003.scope.
Jan 22 10:27:44 np0005592157 podman[353088]: 2026-01-22 15:27:44.600628926 +0000 UTC m=+0.026625188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:27:44 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:27:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820651ffce0546d8e951bea1c453a3e6ec8a2554726c08481a145a551a146d9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820651ffce0546d8e951bea1c453a3e6ec8a2554726c08481a145a551a146d9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820651ffce0546d8e951bea1c453a3e6ec8a2554726c08481a145a551a146d9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820651ffce0546d8e951bea1c453a3e6ec8a2554726c08481a145a551a146d9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:44 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820651ffce0546d8e951bea1c453a3e6ec8a2554726c08481a145a551a146d9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:44 np0005592157 podman[353088]: 2026-01-22 15:27:44.729413892 +0000 UTC m=+0.155410144 container init 7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:27:44 np0005592157 podman[353088]: 2026-01-22 15:27:44.740887851 +0000 UTC m=+0.166884093 container start 7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:27:44 np0005592157 podman[353088]: 2026-01-22 15:27:44.745981735 +0000 UTC m=+0.171977977 container attach 7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:27:44 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:45 np0005592157 elastic_lehmann[353105]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:27:45 np0005592157 elastic_lehmann[353105]: --> relative data size: 1.0
Jan 22 10:27:45 np0005592157 elastic_lehmann[353105]: --> All data devices are unavailable
Jan 22 10:27:45 np0005592157 systemd[1]: libpod-7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003.scope: Deactivated successfully.
Jan 22 10:27:45 np0005592157 podman[353088]: 2026-01-22 15:27:45.602169382 +0000 UTC m=+1.028165624 container died 7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:27:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-820651ffce0546d8e951bea1c453a3e6ec8a2554726c08481a145a551a146d9e-merged.mount: Deactivated successfully.
Jan 22 10:27:45 np0005592157 podman[353088]: 2026-01-22 15:27:45.6577072 +0000 UTC m=+1.083703432 container remove 7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:27:45 np0005592157 systemd[1]: libpod-conmon-7182222a9c827c0030cc80e0180cabe1ad924a26f2702d5980768830d8316003.scope: Deactivated successfully.
Jan 22 10:27:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:45.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:45.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:45 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:46 np0005592157 podman[353274]: 2026-01-22 15:27:46.222508673 +0000 UTC m=+0.054764861 container create 63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 10:27:46 np0005592157 systemd[1]: Started libpod-conmon-63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504.scope.
Jan 22 10:27:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:27:46 np0005592157 podman[353274]: 2026-01-22 15:27:46.197734612 +0000 UTC m=+0.029990860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:27:46 np0005592157 podman[353274]: 2026-01-22 15:27:46.29365146 +0000 UTC m=+0.125907638 container init 63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:27:46 np0005592157 podman[353274]: 2026-01-22 15:27:46.301588473 +0000 UTC m=+0.133844671 container start 63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:27:46 np0005592157 podman[353274]: 2026-01-22 15:27:46.306246166 +0000 UTC m=+0.138502324 container attach 63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:27:46 np0005592157 loving_shaw[353291]: 167 167
Jan 22 10:27:46 np0005592157 systemd[1]: libpod-63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504.scope: Deactivated successfully.
Jan 22 10:27:46 np0005592157 podman[353274]: 2026-01-22 15:27:46.308345777 +0000 UTC m=+0.140601975 container died 63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:27:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-600e26f9f625182c63bb43ddd2c2d5711202189d1af22d90324f05516c86d916-merged.mount: Deactivated successfully.
Jan 22 10:27:46 np0005592157 podman[353274]: 2026-01-22 15:27:46.351873754 +0000 UTC m=+0.184129952 container remove 63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:27:46 np0005592157 systemd[1]: libpod-conmon-63a8896724a2b5fe2a9008d8938b1c8c809016449c2125d58b6a2a906c86e504.scope: Deactivated successfully.
Jan 22 10:27:46 np0005592157 podman[353313]: 2026-01-22 15:27:46.575047052 +0000 UTC m=+0.066678250 container create 4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:27:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:27:46 np0005592157 systemd[1]: Started libpod-conmon-4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed.scope.
Jan 22 10:27:46 np0005592157 podman[353313]: 2026-01-22 15:27:46.549044751 +0000 UTC m=+0.040675949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:27:46 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:27:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acfad1016ce1602a3f01e2a6d5718ab68d15d33edb6db3032cc38ab576be8319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acfad1016ce1602a3f01e2a6d5718ab68d15d33edb6db3032cc38ab576be8319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acfad1016ce1602a3f01e2a6d5718ab68d15d33edb6db3032cc38ab576be8319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:46 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acfad1016ce1602a3f01e2a6d5718ab68d15d33edb6db3032cc38ab576be8319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:46 np0005592157 podman[353313]: 2026-01-22 15:27:46.691996132 +0000 UTC m=+0.183627330 container init 4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:27:46 np0005592157 podman[353313]: 2026-01-22 15:27:46.701659906 +0000 UTC m=+0.193291084 container start 4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 10:27:46 np0005592157 podman[353313]: 2026-01-22 15:27:46.7063343 +0000 UTC m=+0.197965478 container attach 4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:27:47 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]: {
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:    "0": [
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:        {
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "devices": [
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "/dev/loop3"
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            ],
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "lv_name": "ceph_lv0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "lv_size": "7511998464",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "name": "ceph_lv0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "tags": {
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.cluster_name": "ceph",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.crush_device_class": "",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.encrypted": "0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.osd_id": "0",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.type": "block",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:                "ceph.vdo": "0"
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            },
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "type": "block",
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:            "vg_name": "ceph_vg0"
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:        }
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]:    ]
Jan 22 10:27:47 np0005592157 keen_sutherland[353331]: }
Jan 22 10:27:47 np0005592157 systemd[1]: libpod-4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed.scope: Deactivated successfully.
Jan 22 10:27:47 np0005592157 conmon[353331]: conmon 4bf3cbe62c63dd31a25e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed.scope/container/memory.events
Jan 22 10:27:47 np0005592157 podman[353313]: 2026-01-22 15:27:47.511106959 +0000 UTC m=+1.002738127 container died 4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:27:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-acfad1016ce1602a3f01e2a6d5718ab68d15d33edb6db3032cc38ab576be8319-merged.mount: Deactivated successfully.
Jan 22 10:27:47 np0005592157 podman[353313]: 2026-01-22 15:27:47.565456479 +0000 UTC m=+1.057087647 container remove 4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:27:47 np0005592157 systemd[1]: libpod-conmon-4bf3cbe62c63dd31a25e320341de9f5d31e23a2f57ac1b3c50804870ffd926ed.scope: Deactivated successfully.
Jan 22 10:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:27:47
Jan 22 10:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', '.rgw.root', 'images', '.mgr', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 22 10:27:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:27:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:27:47.667 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:27:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:27:47.668 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:27:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:27:47.668 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:27:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:47.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:47.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 78 slow ops, oldest one blocked for 6658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:48 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:48 np0005592157 ceph-mon[74359]: Health check update: 78 slow ops, oldest one blocked for 6658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:48 np0005592157 podman[353492]: 2026-01-22 15:27:48.133250185 +0000 UTC m=+0.036343024 container create 540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dirac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:27:48 np0005592157 systemd[1]: Started libpod-conmon-540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407.scope.
Jan 22 10:27:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:27:48 np0005592157 podman[353492]: 2026-01-22 15:27:48.198726504 +0000 UTC m=+0.101819373 container init 540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:27:48 np0005592157 podman[353492]: 2026-01-22 15:27:48.20635107 +0000 UTC m=+0.109443909 container start 540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:27:48 np0005592157 nostalgic_dirac[353509]: 167 167
Jan 22 10:27:48 np0005592157 systemd[1]: libpod-540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407.scope: Deactivated successfully.
Jan 22 10:27:48 np0005592157 conmon[353509]: conmon 540be1f38a7b05da4ce3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407.scope/container/memory.events
Jan 22 10:27:48 np0005592157 podman[353492]: 2026-01-22 15:27:48.117779039 +0000 UTC m=+0.020871878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:27:48 np0005592157 podman[353492]: 2026-01-22 15:27:48.372476813 +0000 UTC m=+0.275569652 container attach 540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:27:48 np0005592157 podman[353492]: 2026-01-22 15:27:48.373304333 +0000 UTC m=+0.276397172 container died 540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dirac, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 22 10:27:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1e523fb4518d379a3317442fedd14bf43eb201fdd4848583bbb6cb14736412d2-merged.mount: Deactivated successfully.
Jan 22 10:27:48 np0005592157 podman[353492]: 2026-01-22 15:27:48.4168551 +0000 UTC m=+0.319947959 container remove 540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dirac, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:27:48 np0005592157 systemd[1]: libpod-conmon-540be1f38a7b05da4ce3e798fc14c38561269fd38700da34b659b859520aa407.scope: Deactivated successfully.
Jan 22 10:27:48 np0005592157 podman[353534]: 2026-01-22 15:27:48.550194488 +0000 UTC m=+0.020891079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:27:48 np0005592157 podman[353534]: 2026-01-22 15:27:48.960605231 +0000 UTC m=+0.431301792 container create dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:27:49 np0005592157 systemd[1]: Started libpod-conmon-dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a.scope.
Jan 22 10:27:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:27:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7f6d1453b78fe2615ab0a827ade366a5eec83555454069afbd458099418656/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7f6d1453b78fe2615ab0a827ade366a5eec83555454069afbd458099418656/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7f6d1453b78fe2615ab0a827ade366a5eec83555454069afbd458099418656/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7f6d1453b78fe2615ab0a827ade366a5eec83555454069afbd458099418656/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:27:49 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:49 np0005592157 podman[353534]: 2026-01-22 15:27:49.085093713 +0000 UTC m=+0.555790314 container init dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:27:49 np0005592157 podman[353534]: 2026-01-22 15:27:49.093498007 +0000 UTC m=+0.564194578 container start dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:27:49 np0005592157 podman[353534]: 2026-01-22 15:27:49.097025963 +0000 UTC m=+0.567722524 container attach dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:27:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]: {
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]:        "osd_id": 0,
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]:        "type": "bluestore"
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]:    }
Jan 22 10:27:49 np0005592157 jolly_feistel[353550]: }
Jan 22 10:27:49 np0005592157 systemd[1]: libpod-dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a.scope: Deactivated successfully.
Jan 22 10:27:49 np0005592157 podman[353572]: 2026-01-22 15:27:49.917728609 +0000 UTC m=+0.020928349 container died dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_feistel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:27:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-af7f6d1453b78fe2615ab0a827ade366a5eec83555454069afbd458099418656-merged.mount: Deactivated successfully.
Jan 22 10:27:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:49.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:49 np0005592157 podman[353572]: 2026-01-22 15:27:49.972010967 +0000 UTC m=+0.075210697 container remove dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:27:49 np0005592157 systemd[1]: libpod-conmon-dbf649db4a76004ba5cc7f4d02a20ecf91a5d877fc5a9f6ee1c2031fbb5a995a.scope: Deactivated successfully.
Jan 22 10:27:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:49.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:27:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:27:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 11afba95-e955-477b-904a-e704f4b5c414 does not exist
Jan 22 10:27:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f3f418d3-568c-4086-8bc4-fc7324d114c9 does not exist
Jan 22 10:27:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5454450c-13f7-423a-afd9-8b261f32bdb6 does not exist
Jan 22 10:27:50 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:50 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:51 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:51.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:51.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:52 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 78 slow ops, oldest one blocked for 6663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:53 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:53 np0005592157 ceph-mon[74359]: Health check update: 78 slow ops, oldest one blocked for 6663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:53.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:53.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:54 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:55 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:27:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:55.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:27:56 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:57 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 78 slow ops, oldest one blocked for 6668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:58 np0005592157 ceph-mon[74359]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:58 np0005592157 ceph-mon[74359]: Health check update: 78 slow ops, oldest one blocked for 6668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:59 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:27:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:27:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:59.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:27:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:59.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:00 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:01 np0005592157 podman[353692]: 2026-01-22 15:28:01.331127267 +0000 UTC m=+0.063040452 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:28:01 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:01.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:01.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:02 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:03 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:03 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:04 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:28:05 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:06 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:07.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:08 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:08 np0005592157 podman[353717]: 2026-01-22 15:28:08.350287867 +0000 UTC m=+0.085518278 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:28:09 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:09 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:09.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:10.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:10 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:11 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:11.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:12.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:12 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6683 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:13 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:13 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6683 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:13.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:14.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:15 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:15 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:15.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:16.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:28:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:28:16 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:16 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:17 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:17.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:18.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:18 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:18 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:19.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:21 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:21 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:21.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:22.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:22 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:23 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:23.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:24 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:24 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:25 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:25.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:27 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:27 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:27.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:28 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:28 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:29 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:29 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:29.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:30 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:31 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:31.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:32 np0005592157 podman[353855]: 2026-01-22 15:28:32.311954147 +0000 UTC m=+0.050402195 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 22 10:28:32 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:33.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:34.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:34 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:34 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:35 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:36.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:36 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:36 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:38.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:38.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:38 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:38 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:39 np0005592157 podman[353878]: 2026-01-22 15:28:39.339197513 +0000 UTC m=+0.081018888 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:28:39 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:40.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:40 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:40 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:42.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:42 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #231. Immutable memtables: 0.
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.477679) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 231
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723477733, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 2456, "num_deletes": 542, "total_data_size": 3243505, "memory_usage": 3302336, "flush_reason": "Manual Compaction"}
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #232: started
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723744021, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 232, "file_size": 3165791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 101773, "largest_seqno": 104228, "table_properties": {"data_size": 3155857, "index_size": 5403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31224, "raw_average_key_size": 22, "raw_value_size": 3131924, "raw_average_value_size": 2304, "num_data_blocks": 228, "num_entries": 1359, "num_filter_entries": 1359, "num_deletions": 542, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095554, "oldest_key_time": 1769095554, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 232, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 266742 microseconds, and 7329 cpu microseconds.
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.744388) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #232: 3165791 bytes OK
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.744451) [db/memtable_list.cc:519] [default] Level-0 commit table #232 started
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.943421) [db/memtable_list.cc:722] [default] Level-0 commit table #232: memtable #1 done
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.943496) EVENT_LOG_v1 {"time_micros": 1769095723943480, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.943535) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 3232059, prev total WAL file size 3234264, number of live WAL files 2.
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000228.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.945547) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035323836' seq:72057594037927935, type:22 .. '6C6F676D0035353339' seq:0, type:0; will stop at (end)
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [232(3091KB)], [230(10MB)]
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723945641, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [232], "files_L6": [230], "score": -1, "input_data_size": 14001292, "oldest_snapshot_seqno": -1}
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:43 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:44.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #233: 14430 keys, 13749727 bytes, temperature: kUnknown
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095724251642, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 233, "file_size": 13749727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13669409, "index_size": 43149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36101, "raw_key_size": 395924, "raw_average_key_size": 27, "raw_value_size": 13422699, "raw_average_value_size": 930, "num_data_blocks": 1574, "num_entries": 14430, "num_filter_entries": 14430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.252132) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 13749727 bytes
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.253993) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.7 rd, 44.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.3 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.8) write-amplify(4.3) OK, records in: 15529, records dropped: 1099 output_compression: NoCompression
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.254023) EVENT_LOG_v1 {"time_micros": 1769095724254010, "job": 146, "event": "compaction_finished", "compaction_time_micros": 306142, "compaction_time_cpu_micros": 33393, "output_level": 6, "num_output_files": 1, "total_output_size": 13749727, "num_input_records": 15529, "num_output_records": 14430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:43.945451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.254114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.254120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.254123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.254126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:44.254128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000232.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095724256427, "job": 0, "event": "table_file_deletion", "file_number": 232}
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:44 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095724259713, "job": 0, "event": "table_file_deletion", "file_number": 230}
Jan 22 10:28:45 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:46.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:28:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #234. Immutable memtables: 0.
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.881487) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 234
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726881557, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 303, "num_deletes": 258, "total_data_size": 84631, "memory_usage": 91672, "flush_reason": "Manual Compaction"}
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #235: started
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726885412, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 235, "file_size": 83851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 104229, "largest_seqno": 104531, "table_properties": {"data_size": 81873, "index_size": 141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5347, "raw_average_key_size": 18, "raw_value_size": 77924, "raw_average_value_size": 274, "num_data_blocks": 6, "num_entries": 284, "num_filter_entries": 284, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095723, "oldest_key_time": 1769095723, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 235, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 3981 microseconds, and 1783 cpu microseconds.
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.885470) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #235: 83851 bytes OK
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.885496) [db/memtable_list.cc:519] [default] Level-0 commit table #235 started
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.887384) [db/memtable_list.cc:722] [default] Level-0 commit table #235: memtable #1 done
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.887412) EVENT_LOG_v1 {"time_micros": 1769095726887404, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.887435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 82408, prev total WAL file size 82408, number of live WAL files 2.
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000231.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.888076) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [235(81KB)], [233(13MB)]
Jan 22 10:28:46 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726888135, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [235], "files_L6": [233], "score": -1, "input_data_size": 13833578, "oldest_snapshot_seqno": -1}
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #236: 14190 keys, 12058620 bytes, temperature: kUnknown
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727176163, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 236, "file_size": 12058620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11981185, "index_size": 40842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 391611, "raw_average_key_size": 27, "raw_value_size": 11739840, "raw_average_value_size": 827, "num_data_blocks": 1472, "num_entries": 14190, "num_filter_entries": 14190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.176440) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 12058620 bytes
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.177842) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.0 rd, 41.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(308.8) write-amplify(143.8) OK, records in: 14714, records dropped: 524 output_compression: NoCompression
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.177859) EVENT_LOG_v1 {"time_micros": 1769095727177851, "job": 148, "event": "compaction_finished", "compaction_time_micros": 288109, "compaction_time_cpu_micros": 46166, "output_level": 6, "num_output_files": 1, "total_output_size": 12058620, "num_input_records": 14714, "num_output_records": 14190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000235.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727178102, "job": 148, "event": "table_file_deletion", "file_number": 235}
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727180234, "job": 148, "event": "table_file_deletion", "file_number": 233}
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:46.887964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.180355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.180362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.180364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.180365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:28:47.180367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:28:47
Jan 22 10:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta']
Jan 22 10:28:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:28:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:28:47.668 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:28:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:28:47.669 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:28:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:28:47.670 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:28:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:48.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:48 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:48 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:48 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:50.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:50 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:28:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:28:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:51 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:28:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:52.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:28:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:28:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:52.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f1e145a0-b9df-40cb-a3a7-9ad2f4d7edd8 does not exist
Jan 22 10:28:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f795e084-b210-4613-8313-696bc067c1a1 does not exist
Jan 22 10:28:52 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6b214439-967b-4ae0-814c-a7be190c34e1 does not exist
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:28:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:28:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:53 np0005592157 podman[354350]: 2026-01-22 15:28:53.140867747 +0000 UTC m=+0.020860567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:28:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:54.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:54 np0005592157 podman[354350]: 2026-01-22 15:28:54.048831112 +0000 UTC m=+0.928823912 container create 5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:28:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:28:54 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:28:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:54 np0005592157 systemd[1]: Started libpod-conmon-5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c.scope.
Jan 22 10:28:54 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:28:54 np0005592157 podman[354350]: 2026-01-22 15:28:54.795727036 +0000 UTC m=+1.675719926 container init 5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:28:54 np0005592157 podman[354350]: 2026-01-22 15:28:54.810665568 +0000 UTC m=+1.690658408 container start 5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:28:54 np0005592157 nice_varahamihira[354368]: 167 167
Jan 22 10:28:54 np0005592157 systemd[1]: libpod-5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c.scope: Deactivated successfully.
Jan 22 10:28:55 np0005592157 podman[354350]: 2026-01-22 15:28:55.064913161 +0000 UTC m=+1.944905961 container attach 5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:28:55 np0005592157 podman[354350]: 2026-01-22 15:28:55.065446024 +0000 UTC m=+1.945438824 container died 5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:28:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-731b5388b33901f33d158485e2f96da06d3f760cbe4c67a39b256393a0786791-merged.mount: Deactivated successfully.
Jan 22 10:28:55 np0005592157 podman[354350]: 2026-01-22 15:28:55.276486138 +0000 UTC m=+2.156478968 container remove 5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 10:28:55 np0005592157 systemd[1]: libpod-conmon-5ba442487f3e80ba6f48b6965e508c7f2c82c36bd67860cf70f899569963581c.scope: Deactivated successfully.
Jan 22 10:28:55 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:55 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:55 np0005592157 podman[354392]: 2026-01-22 15:28:55.482872279 +0000 UTC m=+0.061219367 container create e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:28:55 np0005592157 podman[354392]: 2026-01-22 15:28:55.454599433 +0000 UTC m=+0.032946531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:28:55 np0005592157 systemd[1]: Started libpod-conmon-e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882.scope.
Jan 22 10:28:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:28:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf81af0790c3102b8883baff80cacd790ecdca9d7ef29ab54f284328b8cf7bb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf81af0790c3102b8883baff80cacd790ecdca9d7ef29ab54f284328b8cf7bb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf81af0790c3102b8883baff80cacd790ecdca9d7ef29ab54f284328b8cf7bb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf81af0790c3102b8883baff80cacd790ecdca9d7ef29ab54f284328b8cf7bb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf81af0790c3102b8883baff80cacd790ecdca9d7ef29ab54f284328b8cf7bb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:55 np0005592157 podman[354392]: 2026-01-22 15:28:55.785653621 +0000 UTC m=+0.364000749 container init e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:28:55 np0005592157 podman[354392]: 2026-01-22 15:28:55.793701266 +0000 UTC m=+0.372048384 container start e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:28:55 np0005592157 podman[354392]: 2026-01-22 15:28:55.797901868 +0000 UTC m=+0.376248986 container attach e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:28:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:56.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:56 np0005592157 compassionate_vaughan[354409]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:28:56 np0005592157 compassionate_vaughan[354409]: --> relative data size: 1.0
Jan 22 10:28:56 np0005592157 compassionate_vaughan[354409]: --> All data devices are unavailable
Jan 22 10:28:56 np0005592157 systemd[1]: libpod-e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882.scope: Deactivated successfully.
Jan 22 10:28:56 np0005592157 podman[354392]: 2026-01-22 15:28:56.604516632 +0000 UTC m=+1.182863720 container died e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 22 10:28:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cf81af0790c3102b8883baff80cacd790ecdca9d7ef29ab54f284328b8cf7bb9-merged.mount: Deactivated successfully.
Jan 22 10:28:56 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:56 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592157 podman[354392]: 2026-01-22 15:28:57.110075176 +0000 UTC m=+1.688422264 container remove e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:28:57 np0005592157 systemd[1]: libpod-conmon-e35f2c5991b3c54ef209af73827f36062b80c31cfd150fe4960cc9f9fde76882.scope: Deactivated successfully.
Jan 22 10:28:57 np0005592157 podman[354577]: 2026-01-22 15:28:57.720303712 +0000 UTC m=+0.036244681 container create da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:28:57 np0005592157 systemd[1]: Started libpod-conmon-da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0.scope.
Jan 22 10:28:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:28:57 np0005592157 podman[354577]: 2026-01-22 15:28:57.787121784 +0000 UTC m=+0.103062753 container init da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 10:28:57 np0005592157 podman[354577]: 2026-01-22 15:28:57.792878104 +0000 UTC m=+0.108819073 container start da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hawking, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:28:57 np0005592157 podman[354577]: 2026-01-22 15:28:57.796689037 +0000 UTC m=+0.112630006 container attach da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hawking, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:28:57 np0005592157 sharp_hawking[354593]: 167 167
Jan 22 10:28:57 np0005592157 systemd[1]: libpod-da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0.scope: Deactivated successfully.
Jan 22 10:28:57 np0005592157 podman[354577]: 2026-01-22 15:28:57.797595539 +0000 UTC m=+0.113536508 container died da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:28:57 np0005592157 podman[354577]: 2026-01-22 15:28:57.706571148 +0000 UTC m=+0.022512137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:28:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-82d5807a8d885cb86ade281804a04a72d18f55eb356cc5791fe1a158803a1054-merged.mount: Deactivated successfully.
Jan 22 10:28:57 np0005592157 podman[354577]: 2026-01-22 15:28:57.835579941 +0000 UTC m=+0.151520910 container remove da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:28:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:28:57 np0005592157 systemd[1]: libpod-conmon-da8c5c164909cb0cee63e44a8c1494a0c1ce41eb335075eea8feb5208c2e32f0.scope: Deactivated successfully.
Jan 22 10:28:57 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:58.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:58 np0005592157 podman[354619]: 2026-01-22 15:28:57.972755661 +0000 UTC m=+0.025213443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:28:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:28:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:28:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:28:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:58 np0005592157 podman[354619]: 2026-01-22 15:28:58.271099965 +0000 UTC m=+0.323557727 container create 6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:28:58 np0005592157 systemd[1]: Started libpod-conmon-6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9.scope.
Jan 22 10:28:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:28:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6520e99013859b847075bd7123d12456b01522703afa5a8ad296b13396502ebf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6520e99013859b847075bd7123d12456b01522703afa5a8ad296b13396502ebf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6520e99013859b847075bd7123d12456b01522703afa5a8ad296b13396502ebf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6520e99013859b847075bd7123d12456b01522703afa5a8ad296b13396502ebf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:28:59 np0005592157 podman[354619]: 2026-01-22 15:28:59.296744466 +0000 UTC m=+1.349202238 container init 6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brown, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:28:59 np0005592157 podman[354619]: 2026-01-22 15:28:59.308556573 +0000 UTC m=+1.361014335 container start 6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brown, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:28:59 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:59 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:59 np0005592157 podman[354619]: 2026-01-22 15:28:59.62680805 +0000 UTC m=+1.679265862 container attach 6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:28:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:00 np0005592157 keen_brown[354636]: {
Jan 22 10:29:00 np0005592157 keen_brown[354636]:    "0": [
Jan 22 10:29:00 np0005592157 keen_brown[354636]:        {
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "devices": [
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "/dev/loop3"
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            ],
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "lv_name": "ceph_lv0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "lv_size": "7511998464",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "name": "ceph_lv0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "tags": {
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.cluster_name": "ceph",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.crush_device_class": "",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.encrypted": "0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.osd_id": "0",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.type": "block",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:                "ceph.vdo": "0"
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            },
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "type": "block",
Jan 22 10:29:00 np0005592157 keen_brown[354636]:            "vg_name": "ceph_vg0"
Jan 22 10:29:00 np0005592157 keen_brown[354636]:        }
Jan 22 10:29:00 np0005592157 keen_brown[354636]:    ]
Jan 22 10:29:00 np0005592157 keen_brown[354636]: }
Jan 22 10:29:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:00.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:00 np0005592157 systemd[1]: libpod-6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9.scope: Deactivated successfully.
Jan 22 10:29:00 np0005592157 podman[354619]: 2026-01-22 15:29:00.08437643 +0000 UTC m=+2.136834192 container died 6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 10:29:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6520e99013859b847075bd7123d12456b01522703afa5a8ad296b13396502ebf-merged.mount: Deactivated successfully.
Jan 22 10:29:00 np0005592157 podman[354619]: 2026-01-22 15:29:00.148658931 +0000 UTC m=+2.201116683 container remove 6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brown, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:29:00 np0005592157 systemd[1]: libpod-conmon-6a24d30a3e28e3064f83a5807fe5db2c249371f5a6d33653716cd916cfed4ec9.scope: Deactivated successfully.
Jan 22 10:29:00 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:00 np0005592157 podman[354801]: 2026-01-22 15:29:00.71335959 +0000 UTC m=+0.037211124 container create 1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:29:00 np0005592157 systemd[1]: Started libpod-conmon-1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675.scope.
Jan 22 10:29:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:29:00 np0005592157 podman[354801]: 2026-01-22 15:29:00.696800128 +0000 UTC m=+0.020651682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:29:00 np0005592157 podman[354801]: 2026-01-22 15:29:00.79244134 +0000 UTC m=+0.116292894 container init 1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:29:00 np0005592157 podman[354801]: 2026-01-22 15:29:00.797893763 +0000 UTC m=+0.121745297 container start 1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:29:00 np0005592157 podman[354801]: 2026-01-22 15:29:00.801417928 +0000 UTC m=+0.125269462 container attach 1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 10:29:00 np0005592157 keen_mclaren[354817]: 167 167
Jan 22 10:29:00 np0005592157 systemd[1]: libpod-1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675.scope: Deactivated successfully.
Jan 22 10:29:00 np0005592157 podman[354801]: 2026-01-22 15:29:00.802918005 +0000 UTC m=+0.126769539 container died 1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:29:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-372435482a738e0bcd74fd8d9e980e7723d40b78c71b38641ad45dafbf3f400b-merged.mount: Deactivated successfully.
Jan 22 10:29:00 np0005592157 podman[354801]: 2026-01-22 15:29:00.845088008 +0000 UTC m=+0.168939542 container remove 1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:29:00 np0005592157 systemd[1]: libpod-conmon-1c0bf64e6e210c0250f3f456f10828fe537409a72461e8586a7a798d80465675.scope: Deactivated successfully.
Jan 22 10:29:01 np0005592157 podman[354841]: 2026-01-22 15:29:01.039690553 +0000 UTC m=+0.038344672 container create 6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:29:01 np0005592157 systemd[1]: Started libpod-conmon-6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5.scope.
Jan 22 10:29:01 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:29:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0c33639e613706175dd15b3a82f85df17a9bd3221fd8661764758ca2edec42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:29:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0c33639e613706175dd15b3a82f85df17a9bd3221fd8661764758ca2edec42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:29:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0c33639e613706175dd15b3a82f85df17a9bd3221fd8661764758ca2edec42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:29:01 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0c33639e613706175dd15b3a82f85df17a9bd3221fd8661764758ca2edec42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:29:01 np0005592157 podman[354841]: 2026-01-22 15:29:01.023772077 +0000 UTC m=+0.022426216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:29:01 np0005592157 podman[354841]: 2026-01-22 15:29:01.12811353 +0000 UTC m=+0.126767669 container init 6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:29:01 np0005592157 podman[354841]: 2026-01-22 15:29:01.135732875 +0000 UTC m=+0.134387004 container start 6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khayyam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:29:01 np0005592157 podman[354841]: 2026-01-22 15:29:01.139329182 +0000 UTC m=+0.137983321 container attach 6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:29:01 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]: {
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]:        "osd_id": 0,
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]:        "type": "bluestore"
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]:    }
Jan 22 10:29:01 np0005592157 trusting_khayyam[354858]: }
Jan 22 10:29:01 np0005592157 systemd[1]: libpod-6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5.scope: Deactivated successfully.
Jan 22 10:29:01 np0005592157 podman[354841]: 2026-01-22 15:29:01.999168459 +0000 UTC m=+0.997822578 container died 6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khayyam, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:29:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:02.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:02 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7e0c33639e613706175dd15b3a82f85df17a9bd3221fd8661764758ca2edec42-merged.mount: Deactivated successfully.
Jan 22 10:29:02 np0005592157 podman[354841]: 2026-01-22 15:29:02.061294457 +0000 UTC m=+1.059948576 container remove 6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:29:02 np0005592157 systemd[1]: libpod-conmon-6a71ee3d021546c63ace09ad774596060b06d88abc872e534069922fd06516c5.scope: Deactivated successfully.
Jan 22 10:29:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:02.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev abc703cf-fbc8-4f8e-ac13-38a5a79337a7 does not exist
Jan 22 10:29:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 09292772-7e4f-4e8d-b739-7eb9f6d7b0f5 does not exist
Jan 22 10:29:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev df122fd5-699f-4374-82c0-106a361abe12 does not exist
Jan 22 10:29:02 np0005592157 podman[354940]: 2026-01-22 15:29:02.475000041 +0000 UTC m=+0.077956033 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:04.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:04 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:04 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:04.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:29:05 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:29:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:06.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:29:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:06.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:07 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:07 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:08.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:08.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:08 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:08 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:09 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:10.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:10.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:10 np0005592157 podman[354965]: 2026-01-22 15:29:10.179920992 +0000 UTC m=+0.107623374 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:29:11 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:11 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:12.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:29:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:29:12 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:13 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:14.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:14 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:14 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:15 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:29:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:16.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:29:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:16.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:29:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:29:17 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:17 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:18.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:18.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:18 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:29:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691489123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:29:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:29:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691489123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:29:19 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:20.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:20.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:20 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:29:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:22.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:29:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:22.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:22 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:22 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:23 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:23 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:24.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:29:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:29:24 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:25 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:26.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:26.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:26 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:28.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:28.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:28 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:29 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:29 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:30.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:30.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:30 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:30 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:31 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:32.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:33 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:33 np0005592157 podman[355102]: 2026-01-22 15:29:33.307850897 +0000 UTC m=+0.050067277 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:29:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:34.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:34.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:34 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:34 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:35 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:36.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:29:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:29:36 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:36 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:38.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:38 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:40.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:40 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:40.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:40 np0005592157 podman[355126]: 2026-01-22 15:29:40.395719975 +0000 UTC m=+0.115289611 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:29:41 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:42.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:42 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6772 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:43 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6772 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:43 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:44 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:45 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:46.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:29:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:29:47 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:29:47
Jan 22 10:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.control', '.mgr']
Jan 22 10:29:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:29:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:29:47.669 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:29:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:29:47.670 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:29:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:29:47.670 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:29:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:48.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:48.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:48 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:48 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:48 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:50.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:50.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:50 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:51 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:52.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:29:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:52.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:29:52 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6782 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:53 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:53 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6782 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:54.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:54.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:54 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:55 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:56.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:56.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:56 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:56 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:57 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:29:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:58.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:29:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:58.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:58 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:58 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:59 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:00.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:00.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:02.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:02.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:02 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:03 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6792 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:30:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:30:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:04.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6792 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:04 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev dfef3dba-a237-438a-a8c1-65921ec54fc0 does not exist
Jan 22 10:30:04 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a26ef538-5914-4356-b079-1ea51544aa65 does not exist
Jan 22 10:30:04 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a911533a-8461-45cc-a045-f1cd9b4950b4 does not exist
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:30:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:30:04 np0005592157 podman[355345]: 2026-01-22 15:30:04.373122975 +0000 UTC m=+0.096050223 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:30:04 np0005592157 podman[355504]: 2026-01-22 15:30:04.882526722 +0000 UTC m=+0.020270544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:30:05 np0005592157 podman[355504]: 2026-01-22 15:30:05.060204426 +0000 UTC m=+0.197948218 container create f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:30:05 np0005592157 systemd[1]: Started libpod-conmon-f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6.scope.
Jan 22 10:30:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:30:05 np0005592157 podman[355504]: 2026-01-22 15:30:05.23008819 +0000 UTC m=+0.367832012 container init f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 10:30:05 np0005592157 podman[355504]: 2026-01-22 15:30:05.240631166 +0000 UTC m=+0.378374958 container start f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 10:30:05 np0005592157 eloquent_agnesi[355520]: 167 167
Jan 22 10:30:05 np0005592157 systemd[1]: libpod-f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6.scope: Deactivated successfully.
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:30:05 np0005592157 podman[355504]: 2026-01-22 15:30:05.349324645 +0000 UTC m=+0.487068457 container attach f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:30:05 np0005592157 podman[355504]: 2026-01-22 15:30:05.35034364 +0000 UTC m=+0.488087432 container died f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 10:30:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:30:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:05 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:30:05 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-820df7fd6deaec91cd1a19c690b570bc783b97a84e062509497581693cdb853e-merged.mount: Deactivated successfully.
Jan 22 10:30:05 np0005592157 podman[355504]: 2026-01-22 15:30:05.463226631 +0000 UTC m=+0.600970413 container remove f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_agnesi, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:30:05 np0005592157 systemd[1]: libpod-conmon-f21ae4bc9d3e8722316cbf8117a66861c1fe7a9c48f77a4abd460219a91091b6.scope: Deactivated successfully.
Jan 22 10:30:05 np0005592157 podman[355545]: 2026-01-22 15:30:05.616562994 +0000 UTC m=+0.041076609 container create 53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 10:30:05 np0005592157 systemd[1]: Started libpod-conmon-53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53.scope.
Jan 22 10:30:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:30:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de13707c3ade9c4a2798934005bce3b22fde2097310ecfb388971cc00d6cec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de13707c3ade9c4a2798934005bce3b22fde2097310ecfb388971cc00d6cec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de13707c3ade9c4a2798934005bce3b22fde2097310ecfb388971cc00d6cec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de13707c3ade9c4a2798934005bce3b22fde2097310ecfb388971cc00d6cec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de13707c3ade9c4a2798934005bce3b22fde2097310ecfb388971cc00d6cec2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:05 np0005592157 podman[355545]: 2026-01-22 15:30:05.677908153 +0000 UTC m=+0.102421818 container init 53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 22 10:30:05 np0005592157 podman[355545]: 2026-01-22 15:30:05.685707702 +0000 UTC m=+0.110221307 container start 53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:30:05 np0005592157 podman[355545]: 2026-01-22 15:30:05.6889121 +0000 UTC m=+0.113425705 container attach 53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 22 10:30:05 np0005592157 podman[355545]: 2026-01-22 15:30:05.597237764 +0000 UTC m=+0.021751389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:30:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:06.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:06.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:06 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:06 np0005592157 pensive_carver[355561]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:30:06 np0005592157 pensive_carver[355561]: --> relative data size: 1.0
Jan 22 10:30:06 np0005592157 pensive_carver[355561]: --> All data devices are unavailable
Jan 22 10:30:06 np0005592157 systemd[1]: libpod-53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53.scope: Deactivated successfully.
Jan 22 10:30:06 np0005592157 podman[355545]: 2026-01-22 15:30:06.479376382 +0000 UTC m=+0.903889987 container died 53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:30:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0de13707c3ade9c4a2798934005bce3b22fde2097310ecfb388971cc00d6cec2-merged.mount: Deactivated successfully.
Jan 22 10:30:06 np0005592157 podman[355545]: 2026-01-22 15:30:06.531039847 +0000 UTC m=+0.955553452 container remove 53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:30:06 np0005592157 systemd[1]: libpod-conmon-53e345d32732abb0995d6f36658c9049a2ad44f13792b6239cea858d4edf5b53.scope: Deactivated successfully.
Jan 22 10:30:07 np0005592157 podman[355731]: 2026-01-22 15:30:07.062525911 +0000 UTC m=+0.037696496 container create ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:30:07 np0005592157 systemd[1]: Started libpod-conmon-ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f.scope.
Jan 22 10:30:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:30:07 np0005592157 podman[355731]: 2026-01-22 15:30:07.046199364 +0000 UTC m=+0.021369979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:30:07 np0005592157 podman[355731]: 2026-01-22 15:30:07.142780639 +0000 UTC m=+0.117951324 container init ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:30:07 np0005592157 podman[355731]: 2026-01-22 15:30:07.155204561 +0000 UTC m=+0.130375186 container start ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:30:07 np0005592157 friendly_ardinghelli[355748]: 167 167
Jan 22 10:30:07 np0005592157 systemd[1]: libpod-ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f.scope: Deactivated successfully.
Jan 22 10:30:07 np0005592157 conmon[355748]: conmon ebf0eb8f98f1d7449fa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f.scope/container/memory.events
Jan 22 10:30:07 np0005592157 podman[355731]: 2026-01-22 15:30:07.161963985 +0000 UTC m=+0.137134680 container attach ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 10:30:07 np0005592157 podman[355731]: 2026-01-22 15:30:07.162496468 +0000 UTC m=+0.137667103 container died ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 10:30:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-025a68ad5424d6314c2ae1bbaf00d3d0ce488915d475612428135fd837afae27-merged.mount: Deactivated successfully.
Jan 22 10:30:07 np0005592157 podman[355731]: 2026-01-22 15:30:07.210867272 +0000 UTC m=+0.186037917 container remove ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:30:07 np0005592157 systemd[1]: libpod-conmon-ebf0eb8f98f1d7449fa93bf4014e44877458be0948db0b4b1bef56655961116f.scope: Deactivated successfully.
Jan 22 10:30:07 np0005592157 podman[355772]: 2026-01-22 15:30:07.410841568 +0000 UTC m=+0.050759024 container create f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:30:07 np0005592157 systemd[1]: Started libpod-conmon-f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687.scope.
Jan 22 10:30:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:30:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8329b38496440f3d49d85e472b7ccac12f5b627489a01d4e0a757d0c017dc7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8329b38496440f3d49d85e472b7ccac12f5b627489a01d4e0a757d0c017dc7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8329b38496440f3d49d85e472b7ccac12f5b627489a01d4e0a757d0c017dc7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8329b38496440f3d49d85e472b7ccac12f5b627489a01d4e0a757d0c017dc7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:07 np0005592157 podman[355772]: 2026-01-22 15:30:07.473749095 +0000 UTC m=+0.113666581 container init f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 10:30:07 np0005592157 podman[355772]: 2026-01-22 15:30:07.481616956 +0000 UTC m=+0.121534422 container start f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tesla, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 10:30:07 np0005592157 podman[355772]: 2026-01-22 15:30:07.388892885 +0000 UTC m=+0.028810371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:30:07 np0005592157 podman[355772]: 2026-01-22 15:30:07.487099639 +0000 UTC m=+0.127017145 container attach f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tesla, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 10:30:07 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:08.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:08.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]: {
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:    "0": [
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:        {
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "devices": [
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "/dev/loop3"
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            ],
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "lv_name": "ceph_lv0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "lv_size": "7511998464",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "name": "ceph_lv0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "tags": {
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.cluster_name": "ceph",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.crush_device_class": "",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.encrypted": "0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.osd_id": "0",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.type": "block",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:                "ceph.vdo": "0"
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            },
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "type": "block",
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:            "vg_name": "ceph_vg0"
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:        }
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]:    ]
Jan 22 10:30:08 np0005592157 ecstatic_tesla[355788]: }
Jan 22 10:30:08 np0005592157 systemd[1]: libpod-f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687.scope: Deactivated successfully.
Jan 22 10:30:08 np0005592157 podman[355772]: 2026-01-22 15:30:08.246314182 +0000 UTC m=+0.886231668 container died f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 10:30:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e8329b38496440f3d49d85e472b7ccac12f5b627489a01d4e0a757d0c017dc7b-merged.mount: Deactivated successfully.
Jan 22 10:30:08 np0005592157 podman[355772]: 2026-01-22 15:30:08.304208027 +0000 UTC m=+0.944125493 container remove f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tesla, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:30:08 np0005592157 systemd[1]: libpod-conmon-f41e512f30bcee30dac8bd075f3d9579c069875b80beb7fa2a55442cb93a7687.scope: Deactivated successfully.
Jan 22 10:30:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:08 np0005592157 podman[355953]: 2026-01-22 15:30:08.926190149 +0000 UTC m=+0.045625219 container create 787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:30:08 np0005592157 systemd[1]: Started libpod-conmon-787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511.scope.
Jan 22 10:30:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:30:09 np0005592157 podman[355953]: 2026-01-22 15:30:08.908872178 +0000 UTC m=+0.028307298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:30:09 np0005592157 podman[355953]: 2026-01-22 15:30:09.008452186 +0000 UTC m=+0.127887276 container init 787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:30:09 np0005592157 podman[355953]: 2026-01-22 15:30:09.01521096 +0000 UTC m=+0.134646030 container start 787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gagarin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:30:09 np0005592157 podman[355953]: 2026-01-22 15:30:09.018035709 +0000 UTC m=+0.137470779 container attach 787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:30:09 np0005592157 youthful_gagarin[355970]: 167 167
Jan 22 10:30:09 np0005592157 systemd[1]: libpod-787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511.scope: Deactivated successfully.
Jan 22 10:30:09 np0005592157 podman[355953]: 2026-01-22 15:30:09.020236622 +0000 UTC m=+0.139671692 container died 787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:30:09 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7190a641969e11d35aa55ec397b6f99320fc404a3ecd1db52d7c118982c4859c-merged.mount: Deactivated successfully.
Jan 22 10:30:09 np0005592157 podman[355953]: 2026-01-22 15:30:09.060644503 +0000 UTC m=+0.180079573 container remove 787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:30:09 np0005592157 systemd[1]: libpod-conmon-787a862c6e6c90097fee4e8b98995bb4b2b5d7c29526bc976b97a27359499511.scope: Deactivated successfully.
Jan 22 10:30:09 np0005592157 podman[355994]: 2026-01-22 15:30:09.22361646 +0000 UTC m=+0.044931022 container create 6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 10:30:09 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:09 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:09 np0005592157 systemd[1]: Started libpod-conmon-6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f.scope.
Jan 22 10:30:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:30:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5233618a1ce26285ad3644ead3e46a87cf9e77bbc7dfcad2d865b009fef355e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5233618a1ce26285ad3644ead3e46a87cf9e77bbc7dfcad2d865b009fef355e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5233618a1ce26285ad3644ead3e46a87cf9e77bbc7dfcad2d865b009fef355e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5233618a1ce26285ad3644ead3e46a87cf9e77bbc7dfcad2d865b009fef355e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:30:09 np0005592157 podman[355994]: 2026-01-22 15:30:09.206356461 +0000 UTC m=+0.027671033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:30:09 np0005592157 podman[355994]: 2026-01-22 15:30:09.305276393 +0000 UTC m=+0.126590965 container init 6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bassi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 10:30:09 np0005592157 podman[355994]: 2026-01-22 15:30:09.310900239 +0000 UTC m=+0.132214791 container start 6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bassi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:30:09 np0005592157 podman[355994]: 2026-01-22 15:30:09.314123257 +0000 UTC m=+0.135437809 container attach 6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 10:30:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:10 np0005592157 funny_bassi[356010]: {
Jan 22 10:30:10 np0005592157 funny_bassi[356010]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:30:10 np0005592157 funny_bassi[356010]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:30:10 np0005592157 funny_bassi[356010]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:30:10 np0005592157 funny_bassi[356010]:        "osd_id": 0,
Jan 22 10:30:10 np0005592157 funny_bassi[356010]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:30:10 np0005592157 funny_bassi[356010]:        "type": "bluestore"
Jan 22 10:30:10 np0005592157 funny_bassi[356010]:    }
Jan 22 10:30:10 np0005592157 funny_bassi[356010]: }
Jan 22 10:30:10 np0005592157 systemd[1]: libpod-6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f.scope: Deactivated successfully.
Jan 22 10:30:10 np0005592157 podman[356033]: 2026-01-22 15:30:10.172938279 +0000 UTC m=+0.022822945 container died 6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bassi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:30:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:10.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5233618a1ce26285ad3644ead3e46a87cf9e77bbc7dfcad2d865b009fef355e1-merged.mount: Deactivated successfully.
Jan 22 10:30:10 np0005592157 podman[356033]: 2026-01-22 15:30:10.223432985 +0000 UTC m=+0.073317641 container remove 6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bassi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:30:10 np0005592157 systemd[1]: libpod-conmon-6c852df0d005590f1cb1b68f52351436fd8b22dade08b7ab65cfdd3b77082f2f.scope: Deactivated successfully.
Jan 22 10:30:10 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:30:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:30:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0bb96606-1052-4837-a74d-96ff32667013 does not exist
Jan 22 10:30:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6781291d-61e6-4af2-9e6e-b842fb98870a does not exist
Jan 22 10:30:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev edc17264-b3bb-413d-abde-4f34b24ed26c does not exist
Jan 22 10:30:11 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:11 np0005592157 podman[356098]: 2026-01-22 15:30:11.340849075 +0000 UTC m=+0.081435179 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:30:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:12.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:12 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:13 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 172 slow ops, oldest one blocked for 6803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:14 np0005592157 ceph-mon[74359]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:14 np0005592157 ceph-mon[74359]: Health check update: 172 slow ops, oldest one blocked for 6803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:15 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:16 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:30:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:30:17 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:18.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 75 slow ops, oldest one blocked for 6808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: Health check update: 75 slow ops, oldest one blocked for 6808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581805369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:30:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581805369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:30:19 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:20.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:20 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:20 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:21 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:22.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:22.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:23 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 75 slow ops, oldest one blocked for 6813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:24.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:24 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:24 np0005592157 ceph-mon[74359]: Health check update: 75 slow ops, oldest one blocked for 6813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:24.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:25 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:26.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:26.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:26 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:26 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:27 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:28.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 75 slow ops, oldest one blocked for 6818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:28 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:28 np0005592157 ceph-mon[74359]: Health check update: 75 slow ops, oldest one blocked for 6818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:29 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:29 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:30.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:30 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:31 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:32 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 75 slow ops, oldest one blocked for 6822 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:34 np0005592157 ceph-mon[74359]: Health check update: 75 slow ops, oldest one blocked for 6822 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:34 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:34.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:35 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:35 np0005592157 podman[356237]: 2026-01-22 15:30:35.346767097 +0000 UTC m=+0.085062537 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 10:30:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:36.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:36.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:37 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 75 slow ops, oldest one blocked for 6827 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:38 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:38 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:38 np0005592157 ceph-mon[74359]: Health check update: 75 slow ops, oldest one blocked for 6827 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:39 np0005592157 ceph-mon[74359]: 65 slow requests (by type [ 'delayed' : 65 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:30:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:40.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:40 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:40 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:41 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:42.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:42 np0005592157 podman[356260]: 2026-01-22 15:30:42.331260053 +0000 UTC m=+0.073741662 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:30:43 np0005592157 ceph-mon[74359]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 75 slow ops, oldest one blocked for 6832 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:44.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:44.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:44 np0005592157 ceph-mon[74359]: Health check update: 75 slow ops, oldest one blocked for 6832 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:44 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:45 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:46.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:30:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:46.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:30:46 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:30:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:30:47
Jan 22 10:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control']
Jan 22 10:30:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:30:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:30:47.670 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:30:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:30:47.671 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:30:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:30:47.671 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:30:47 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 173 slow ops, oldest one blocked for 6837 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:48 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:48 np0005592157 ceph-mon[74359]: Health check update: 173 slow ops, oldest one blocked for 6837 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:48 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:49 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:50.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:50 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:51 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:52.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:53 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 173 slow ops, oldest one blocked for 6842 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:54 np0005592157 ceph-mon[74359]: Health check update: 173 slow ops, oldest one blocked for 6842 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:54 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:54.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:55 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:30:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:56.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:30:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:56.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:56 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:57 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:58.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:30:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:58.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 173 slow ops, oldest one blocked for 6847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:58 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:58 np0005592157 ceph-mon[74359]: Health check update: 173 slow ops, oldest one blocked for 6847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:30:59 np0005592157 ceph-mon[74359]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:59 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:00.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:01 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:02 np0005592157 ceph-mon[74359]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:02.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:02.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:03 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 140 slow ops, oldest one blocked for 6852 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:04.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:04 np0005592157 ceph-mon[74359]: Health check update: 140 slow ops, oldest one blocked for 6852 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:04 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:04.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:31:05 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:06.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:06.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:06 np0005592157 podman[356349]: 2026-01-22 15:31:06.321680133 +0000 UTC m=+0.055269892 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 10:31:06 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:07 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:07 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:08.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:08.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 6857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:09 np0005592157 ceph-mon[74359]: Health check update: 5 slow ops, oldest one blocked for 6857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:09 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:10.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:10.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:31:11 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:12.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:12.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:12 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:13 np0005592157 podman[356554]: 2026-01-22 15:31:13.352206167 +0000 UTC m=+0.090815066 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:31:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 6862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:13 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:31:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:31:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:14.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:14.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 20df470c-eed1-48f0-88e9-6806d185bac0 does not exist
Jan 22 10:31:14 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 19010d23-599c-4415-8842-485ff389dc5b does not exist
Jan 22 10:31:14 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9a2351de-5d05-4d4c-9cd3-aece11ff4e0d does not exist
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: Health check update: 5 slow ops, oldest one blocked for 6862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:31:15 np0005592157 podman[356720]: 2026-01-22 15:31:14.989589092 +0000 UTC m=+0.024956147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:31:15 np0005592157 podman[356720]: 2026-01-22 15:31:15.106634094 +0000 UTC m=+0.142001119 container create df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_joliot, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:31:15 np0005592157 systemd[1]: Started libpod-conmon-df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420.scope.
Jan 22 10:31:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:31:15 np0005592157 podman[356720]: 2026-01-22 15:31:15.267567031 +0000 UTC m=+0.302934086 container init df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 10:31:15 np0005592157 podman[356720]: 2026-01-22 15:31:15.275572495 +0000 UTC m=+0.310939520 container start df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_joliot, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:31:15 np0005592157 podman[356720]: 2026-01-22 15:31:15.279767427 +0000 UTC m=+0.315134492 container attach df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_joliot, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:31:15 np0005592157 eager_joliot[356736]: 167 167
Jan 22 10:31:15 np0005592157 systemd[1]: libpod-df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420.scope: Deactivated successfully.
Jan 22 10:31:15 np0005592157 podman[356720]: 2026-01-22 15:31:15.282178656 +0000 UTC m=+0.317545681 container died df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:31:15 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4d2326da5f757aa6102dfc9c88f1725df41656d4a189d8bddf5ad66dcc852497-merged.mount: Deactivated successfully.
Jan 22 10:31:15 np0005592157 podman[356720]: 2026-01-22 15:31:15.320411644 +0000 UTC m=+0.355778669 container remove df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_joliot, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:31:15 np0005592157 systemd[1]: libpod-conmon-df0f8c04b1b505c80dec3af7a5a580627777e3909b56c789e2ee0c2545d46420.scope: Deactivated successfully.
Jan 22 10:31:15 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:15 np0005592157 podman[356759]: 2026-01-22 15:31:15.499188305 +0000 UTC m=+0.044413640 container create 2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_borg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:31:15 np0005592157 systemd[1]: Started libpod-conmon-2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1.scope.
Jan 22 10:31:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:31:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a836ffd277016b2c8d3434ff1dfca135bcce2d809779c1f0d8592e334be048b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a836ffd277016b2c8d3434ff1dfca135bcce2d809779c1f0d8592e334be048b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a836ffd277016b2c8d3434ff1dfca135bcce2d809779c1f0d8592e334be048b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a836ffd277016b2c8d3434ff1dfca135bcce2d809779c1f0d8592e334be048b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a836ffd277016b2c8d3434ff1dfca135bcce2d809779c1f0d8592e334be048b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:15 np0005592157 podman[356759]: 2026-01-22 15:31:15.483111104 +0000 UTC m=+0.028336489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:31:15 np0005592157 podman[356759]: 2026-01-22 15:31:15.583885761 +0000 UTC m=+0.129111106 container init 2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 10:31:15 np0005592157 podman[356759]: 2026-01-22 15:31:15.591800763 +0000 UTC m=+0.137026098 container start 2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_borg, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:31:15 np0005592157 podman[356759]: 2026-01-22 15:31:15.595196176 +0000 UTC m=+0.140421521 container attach 2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:31:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:16.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:16.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:16 np0005592157 vigorous_borg[356776]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:31:16 np0005592157 vigorous_borg[356776]: --> relative data size: 1.0
Jan 22 10:31:16 np0005592157 vigorous_borg[356776]: --> All data devices are unavailable
Jan 22 10:31:16 np0005592157 systemd[1]: libpod-2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1.scope: Deactivated successfully.
Jan 22 10:31:16 np0005592157 podman[356759]: 2026-01-22 15:31:16.347882099 +0000 UTC m=+0.893107434 container died 2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:31:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a836ffd277016b2c8d3434ff1dfca135bcce2d809779c1f0d8592e334be048b5-merged.mount: Deactivated successfully.
Jan 22 10:31:16 np0005592157 podman[356759]: 2026-01-22 15:31:16.396321706 +0000 UTC m=+0.941547071 container remove 2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_borg, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 10:31:16 np0005592157 systemd[1]: libpod-conmon-2cd4dfc61c1f86cde429675900793e562ee298d91166ef0c1ffde2a9fba1c2f1.scope: Deactivated successfully.
Jan 22 10:31:16 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:31:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:31:17 np0005592157 podman[356943]: 2026-01-22 15:31:17.10944723 +0000 UTC m=+0.045681330 container create 225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lederberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:31:17 np0005592157 systemd[1]: Started libpod-conmon-225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625.scope.
Jan 22 10:31:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:31:17 np0005592157 podman[356943]: 2026-01-22 15:31:17.090983432 +0000 UTC m=+0.027217552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:31:17 np0005592157 podman[356943]: 2026-01-22 15:31:17.19347882 +0000 UTC m=+0.129712940 container init 225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 10:31:17 np0005592157 podman[356943]: 2026-01-22 15:31:17.201026883 +0000 UTC m=+0.137260993 container start 225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:31:17 np0005592157 podman[356943]: 2026-01-22 15:31:17.205485512 +0000 UTC m=+0.141719642 container attach 225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:31:17 np0005592157 objective_lederberg[356959]: 167 167
Jan 22 10:31:17 np0005592157 systemd[1]: libpod-225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625.scope: Deactivated successfully.
Jan 22 10:31:17 np0005592157 podman[356943]: 2026-01-22 15:31:17.207417619 +0000 UTC m=+0.143651719 container died 225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:31:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6824c4206a17895cba8ae75657cfa3f6583bb2e1f19a15a3eba68da31871c63c-merged.mount: Deactivated successfully.
Jan 22 10:31:17 np0005592157 podman[356943]: 2026-01-22 15:31:17.258167121 +0000 UTC m=+0.194401221 container remove 225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lederberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:31:17 np0005592157 systemd[1]: libpod-conmon-225637a0e1f0cadc130ae045ff2cd6ea927e0e5c1a81c4280da1adbfb484b625.scope: Deactivated successfully.
Jan 22 10:31:17 np0005592157 podman[356985]: 2026-01-22 15:31:17.411077883 +0000 UTC m=+0.038580147 container create c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:31:17 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:17 np0005592157 systemd[1]: Started libpod-conmon-c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4.scope.
Jan 22 10:31:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:31:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de8fb8dba474475b553e5ed63bfe75447926decea0435b8dd018d485aa6bfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de8fb8dba474475b553e5ed63bfe75447926decea0435b8dd018d485aa6bfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de8fb8dba474475b553e5ed63bfe75447926decea0435b8dd018d485aa6bfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de8fb8dba474475b553e5ed63bfe75447926decea0435b8dd018d485aa6bfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:17 np0005592157 podman[356985]: 2026-01-22 15:31:17.395416283 +0000 UTC m=+0.022918557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:31:17 np0005592157 podman[356985]: 2026-01-22 15:31:17.508085838 +0000 UTC m=+0.135588112 container init c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 10:31:17 np0005592157 podman[356985]: 2026-01-22 15:31:17.515350335 +0000 UTC m=+0.142852589 container start c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:31:17 np0005592157 podman[356985]: 2026-01-22 15:31:17.519120416 +0000 UTC m=+0.146622670 container attach c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:31:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:18.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:18 np0005592157 trusting_banach[357001]: {
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:    "0": [
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:        {
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "devices": [
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "/dev/loop3"
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            ],
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "lv_name": "ceph_lv0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "lv_size": "7511998464",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "name": "ceph_lv0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "tags": {
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.cluster_name": "ceph",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.crush_device_class": "",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.encrypted": "0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.osd_id": "0",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.type": "block",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:                "ceph.vdo": "0"
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            },
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "type": "block",
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:            "vg_name": "ceph_vg0"
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:        }
Jan 22 10:31:18 np0005592157 trusting_banach[357001]:    ]
Jan 22 10:31:18 np0005592157 trusting_banach[357001]: }
Jan 22 10:31:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:18 np0005592157 systemd[1]: libpod-c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4.scope: Deactivated successfully.
Jan 22 10:31:18 np0005592157 podman[356985]: 2026-01-22 15:31:18.285434831 +0000 UTC m=+0.912937105 container died c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:31:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 6867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:18 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:18 np0005592157 ceph-mon[74359]: Health check update: 5 slow ops, oldest one blocked for 6867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-41de8fb8dba474475b553e5ed63bfe75447926decea0435b8dd018d485aa6bfe-merged.mount: Deactivated successfully.
Jan 22 10:31:19 np0005592157 podman[356985]: 2026-01-22 15:31:19.105856367 +0000 UTC m=+1.733358621 container remove c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:31:19 np0005592157 systemd[1]: libpod-conmon-c7168c7d486678ea085c73625f3c879ba190f9adc2b3cad975f57ebdc81f3de4.scope: Deactivated successfully.
Jan 22 10:31:19 np0005592157 podman[357168]: 2026-01-22 15:31:19.726107159 +0000 UTC m=+0.105026051 container create 50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brahmagupta, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:31:19 np0005592157 podman[357168]: 2026-01-22 15:31:19.640861879 +0000 UTC m=+0.019780771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:31:19 np0005592157 systemd[1]: Started libpod-conmon-50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b.scope.
Jan 22 10:31:19 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:19 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:31:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:19 np0005592157 podman[357168]: 2026-01-22 15:31:19.941337875 +0000 UTC m=+0.320256777 container init 50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:31:19 np0005592157 podman[357168]: 2026-01-22 15:31:19.947427185 +0000 UTC m=+0.326346057 container start 50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:31:19 np0005592157 wizardly_brahmagupta[357185]: 167 167
Jan 22 10:31:19 np0005592157 systemd[1]: libpod-50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b.scope: Deactivated successfully.
Jan 22 10:31:19 np0005592157 podman[357168]: 2026-01-22 15:31:19.98881046 +0000 UTC m=+0.367729332 container attach 50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brahmagupta, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:31:19 np0005592157 podman[357168]: 2026-01-22 15:31:19.990777548 +0000 UTC m=+0.369696420 container died 50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 10:31:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a0b60ad2920fac48a7f0bad6ffc8874e0b5da404a82e47f9cbafdf82116280b7-merged.mount: Deactivated successfully.
Jan 22 10:31:20 np0005592157 podman[357168]: 2026-01-22 15:31:20.177628473 +0000 UTC m=+0.556547365 container remove 50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:31:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:20.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:20 np0005592157 systemd[1]: libpod-conmon-50bbc2c1502557c0846750daa049c618518a8c726fbdff996a22431b83a3b05b.scope: Deactivated successfully.
Jan 22 10:31:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:20.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:20 np0005592157 podman[357211]: 2026-01-22 15:31:20.352725607 +0000 UTC m=+0.043901998 container create 1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cohen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:31:20 np0005592157 systemd[1]: Started libpod-conmon-1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39.scope.
Jan 22 10:31:20 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:31:20 np0005592157 podman[357211]: 2026-01-22 15:31:20.333486751 +0000 UTC m=+0.024663142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:31:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad36b9b240811397b7c3498c49032107576ba100f677dce7f6e4f0f0f907cb70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad36b9b240811397b7c3498c49032107576ba100f677dce7f6e4f0f0f907cb70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad36b9b240811397b7c3498c49032107576ba100f677dce7f6e4f0f0f907cb70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:20 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad36b9b240811397b7c3498c49032107576ba100f677dce7f6e4f0f0f907cb70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:31:20 np0005592157 podman[357211]: 2026-01-22 15:31:20.449755148 +0000 UTC m=+0.140931539 container init 1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:31:20 np0005592157 podman[357211]: 2026-01-22 15:31:20.459572811 +0000 UTC m=+0.150749182 container start 1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cohen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:31:20 np0005592157 podman[357211]: 2026-01-22 15:31:20.46357339 +0000 UTC m=+0.154749781 container attach 1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:31:20 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:21 np0005592157 serene_cohen[357228]: {
Jan 22 10:31:21 np0005592157 serene_cohen[357228]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:31:21 np0005592157 serene_cohen[357228]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:31:21 np0005592157 serene_cohen[357228]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:31:21 np0005592157 serene_cohen[357228]:        "osd_id": 0,
Jan 22 10:31:21 np0005592157 serene_cohen[357228]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:31:21 np0005592157 serene_cohen[357228]:        "type": "bluestore"
Jan 22 10:31:21 np0005592157 serene_cohen[357228]:    }
Jan 22 10:31:21 np0005592157 serene_cohen[357228]: }
Jan 22 10:31:21 np0005592157 systemd[1]: libpod-1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39.scope: Deactivated successfully.
Jan 22 10:31:21 np0005592157 podman[357211]: 2026-01-22 15:31:21.351312292 +0000 UTC m=+1.042488663 container died 1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cohen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:31:21 np0005592157 systemd[1]: var-lib-containers-storage-overlay-ad36b9b240811397b7c3498c49032107576ba100f677dce7f6e4f0f0f907cb70-merged.mount: Deactivated successfully.
Jan 22 10:31:21 np0005592157 podman[357211]: 2026-01-22 15:31:21.548150724 +0000 UTC m=+1.239327095 container remove 1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:31:21 np0005592157 systemd[1]: libpod-conmon-1b761c1bcf1108c856d7296a07bacb16886133e9605fb9eb7cc8dc18110fad39.scope: Deactivated successfully.
Jan 22 10:31:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:31:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:31:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1ec37204-0fe5-4eab-8db0-e2a705e685e2 does not exist
Jan 22 10:31:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5fabaef3-87c9-44a9-aa59-7ab1655e7e13 does not exist
Jan 22 10:31:21 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 00e03dee-84fc-4c8a-a4b0-b7f65e709db2 does not exist
Jan 22 10:31:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:21 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:22.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:22.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:23 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 6872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:24.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:24 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:24 np0005592157 ceph-mon[74359]: Health check update: 5 slow ops, oldest one blocked for 6872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:25 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:26.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:26 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:27 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:28 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 6877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:29 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:29 np0005592157 ceph-mon[74359]: Health check update: 5 slow ops, oldest one blocked for 6877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:30.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:30 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:31 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:32.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:32.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:32 np0005592157 ceph-mon[74359]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 6882 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:33 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:33 np0005592157 ceph-mon[74359]: Health check update: 5 slow ops, oldest one blocked for 6882 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:34.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:34 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:35 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:36.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:36 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:37 np0005592157 podman[357371]: 2026-01-22 15:31:37.327697166 +0000 UTC m=+0.062982490 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 10:31:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:38.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 6887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #237. Immutable memtables: 0.
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.415621) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 149] Flushing memtable with next log file: 237
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898415686, "job": 149, "event": "flush_started", "num_memtables": 1, "num_entries": 2517, "num_deletes": 543, "total_data_size": 3433267, "memory_usage": 3498120, "flush_reason": "Manual Compaction"}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 149] Level-0 flush table #238: started
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898432348, "cf_name": "default", "job": 149, "event": "table_file_creation", "file_number": 238, "file_size": 2094155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 104532, "largest_seqno": 107048, "table_properties": {"data_size": 2085806, "index_size": 3950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 31336, "raw_average_key_size": 23, "raw_value_size": 2063839, "raw_average_value_size": 1570, "num_data_blocks": 166, "num_entries": 1314, "num_filter_entries": 1314, "num_deletions": 543, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095727, "oldest_key_time": 1769095727, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 238, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 149] Flush lasted 16844 microseconds, and 5336 cpu microseconds.
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.432466) [db/flush_job.cc:967] [default] [JOB 149] Level-0 flush table #238: 2094155 bytes OK
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.432488) [db/memtable_list.cc:519] [default] Level-0 commit table #238 started
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.434422) [db/memtable_list.cc:722] [default] Level-0 commit table #238: memtable #1 done
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.434458) EVENT_LOG_v1 {"time_micros": 1769095898434431, "job": 149, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.434477) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 149] Try to delete WAL files size 3421513, prev total WAL file size 3429780, number of live WAL files 2.
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000234.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.435551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323538' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 150] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 149 Base level 0, inputs: [238(2045KB)], [236(11MB)]
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898435598, "job": 150, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [238], "files_L6": [236], "score": -1, "input_data_size": 14152775, "oldest_snapshot_seqno": -1}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 150] Generated table #239: 14481 keys, 11396288 bytes, temperature: kUnknown
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898513044, "cf_name": "default", "job": 150, "event": "table_file_creation", "file_number": 239, "file_size": 11396288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11318911, "index_size": 40080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 396927, "raw_average_key_size": 27, "raw_value_size": 11074481, "raw_average_value_size": 764, "num_data_blocks": 1444, "num_entries": 14481, "num_filter_entries": 14481, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 239, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.513413) [db/compaction/compaction_job.cc:1663] [default] [JOB 150] Compacted 1@0 + 1@6 files to L6 => 11396288 bytes
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.515074) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.6 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.4) OK, records in: 15504, records dropped: 1023 output_compression: NoCompression
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.515091) EVENT_LOG_v1 {"time_micros": 1769095898515083, "job": 150, "event": "compaction_finished", "compaction_time_micros": 77527, "compaction_time_cpu_micros": 34623, "output_level": 6, "num_output_files": 1, "total_output_size": 11396288, "num_input_records": 15504, "num_output_records": 14481, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000238.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898515471, "job": 150, "event": "table_file_deletion", "file_number": 238}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000236.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898517474, "job": 150, "event": "table_file_deletion", "file_number": 236}
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.435433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.517561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.517570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.517573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.517577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:31:38.517581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:39 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:39 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 6887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:40.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:41 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:41 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:42.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:42 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:42 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 6892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:43 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 6892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:44.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:31:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:44.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:31:44 np0005592157 podman[357394]: 2026-01-22 15:31:44.402012298 +0000 UTC m=+0.123235021 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:31:44 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:44 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:46 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:46.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:46.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:31:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:31:47 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:31:47
Jan 22 10:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups']
Jan 22 10:31:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:31:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:31:47.671 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:31:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:31:47.672 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:31:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:31:47.672 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:31:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:48.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:48 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:48 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 6897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:49 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 6897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:49 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:50.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:50.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:51 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:52.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:52.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:52 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 6903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:54.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:54 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:54 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 6903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:54.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:55 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:55 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:31:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:31:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:56 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:31:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 22K writes, 107K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s#012Cumulative WAL: 22K writes, 22K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1850 writes, 9822 keys, 1850 commit groups, 1.0 writes per commit group, ingest: 11.02 MB, 0.02 MB/s#012Interval WAL: 1850 writes, 1850 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.4      2.23              0.52        75    0.030       0      0       0.0       0.0#012  L6      1/0   10.87 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    107.6     93.2      7.82              2.64        74    0.106    803K    44K       0.0       0.0#012 Sum      1/0   10.87 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.9     83.7     84.8     10.05              3.15       149    0.067    803K    44K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     56.0     55.6      1.62              0.27        14    0.115    105K   5935       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    107.6     93.2      7.82              2.64        74    0.106    803K    44K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.5      2.23              0.52        74    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.121, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.83 GB write, 0.12 MB/s write, 0.82 GB read, 0.12 MB/s read, 10.0 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 85.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000491 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4524,81.08 MB,26.6719%) FilterBlock(150,2.19 MB,0.718804%) IndexBlock(150,2.68 MB,0.880261%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:31:57 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:31:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:31:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:58.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 6907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:58 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:58 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 6907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:59 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:00.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:00.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:00 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:02 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:02.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:02.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:03 np0005592157 ceph-mon[74359]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 59 slow ops, oldest one blocked for 6912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:04 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:04 np0005592157 ceph-mon[74359]: Health check update: 59 slow ops, oldest one blocked for 6912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:04 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:32:05 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:06.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:06.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:07 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:08.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:08 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:08 np0005592157 podman[357484]: 2026-01-22 15:32:08.315171109 +0000 UTC m=+0.050396668 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:32:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:08.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 159 slow ops, oldest one blocked for 6917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:09 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:09 np0005592157 ceph-mon[74359]: Health check update: 159 slow ops, oldest one blocked for 6917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:10.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:10 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:32:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:12.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:32:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:12 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 159 slow ops, oldest one blocked for 6922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:14.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:14.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:14 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592157 ceph-mon[74359]: Health check update: 159 slow ops, oldest one blocked for 6922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:15 np0005592157 podman[357556]: 2026-01-22 15:32:15.337248819 +0000 UTC m=+0.078006442 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 10:32:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:16 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:16.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:32:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:32:17 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:18.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:18.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 159 slow ops, oldest one blocked for 6927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:18 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:20.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:20 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:20 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:20 np0005592157 ceph-mon[74359]: Health check update: 159 slow ops, oldest one blocked for 6927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:22.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:22.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:22 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 159 slow ops, oldest one blocked for 6933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bc0c18da-e12c-490b-9cd1-c372a44d1a97 does not exist
Jan 22 10:32:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ff1aea24-02db-4ed5-baac-a8f1b939cee1 does not exist
Jan 22 10:32:23 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 10db4ba1-b414-48c6-9323-c53c67891cf2 does not exist
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:32:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:32:23.895 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:32:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:32:23.896 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:32:23 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:32:23.897 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:32:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:32:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:24 np0005592157 podman[357857]: 2026-01-22 15:32:24.457678934 +0000 UTC m=+0.064874067 container create 0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 10:32:24 np0005592157 systemd[1]: Started libpod-conmon-0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb.scope.
Jan 22 10:32:24 np0005592157 podman[357857]: 2026-01-22 15:32:24.41470663 +0000 UTC m=+0.021901763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:32:24 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:32:24 np0005592157 podman[357857]: 2026-01-22 15:32:24.763168485 +0000 UTC m=+0.370363608 container init 0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:32:24 np0005592157 podman[357857]: 2026-01-22 15:32:24.772033464 +0000 UTC m=+0.379228577 container start 0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 22 10:32:24 np0005592157 brave_jones[357874]: 167 167
Jan 22 10:32:24 np0005592157 systemd[1]: libpod-0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb.scope: Deactivated successfully.
Jan 22 10:32:24 np0005592157 podman[357857]: 2026-01-22 15:32:24.855057889 +0000 UTC m=+0.462253032 container attach 0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:32:24 np0005592157 podman[357857]: 2026-01-22 15:32:24.856946336 +0000 UTC m=+0.464141469 container died 0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 10:32:24 np0005592157 systemd[1]: var-lib-containers-storage-overlay-54ef72a1264183fcbf83bd15ed0863f5cbbb1fe1fd584bc3b4960130410015de-merged.mount: Deactivated successfully.
Jan 22 10:32:24 np0005592157 podman[357857]: 2026-01-22 15:32:24.906747238 +0000 UTC m=+0.513942351 container remove 0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:32:24 np0005592157 systemd[1]: libpod-conmon-0f643bcaea97baef3a56998055652aaace0db1b11fd44ec51ce1b68c852a33bb.scope: Deactivated successfully.
Jan 22 10:32:25 np0005592157 podman[357898]: 2026-01-22 15:32:25.038408577 +0000 UTC m=+0.022717983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:32:25 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:25 np0005592157 ceph-mon[74359]: Health check update: 159 slow ops, oldest one blocked for 6933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:25 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:32:25 np0005592157 podman[357898]: 2026-01-22 15:32:25.34882173 +0000 UTC m=+0.333131126 container create 9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 10:32:25 np0005592157 systemd[1]: Started libpod-conmon-9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41.scope.
Jan 22 10:32:25 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22477951eaf43ba0c5ae56af020ac2988d6151528daf7e9dd80096fa350ddd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22477951eaf43ba0c5ae56af020ac2988d6151528daf7e9dd80096fa350ddd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22477951eaf43ba0c5ae56af020ac2988d6151528daf7e9dd80096fa350ddd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22477951eaf43ba0c5ae56af020ac2988d6151528daf7e9dd80096fa350ddd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:25 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22477951eaf43ba0c5ae56af020ac2988d6151528daf7e9dd80096fa350ddd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:25 np0005592157 podman[357898]: 2026-01-22 15:32:25.681233018 +0000 UTC m=+0.665542424 container init 9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:32:25 np0005592157 podman[357898]: 2026-01-22 15:32:25.688287042 +0000 UTC m=+0.672596428 container start 9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:32:25 np0005592157 podman[357898]: 2026-01-22 15:32:25.832784359 +0000 UTC m=+0.817093775 container attach 9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:32:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:26.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:26 np0005592157 bold_merkle[357916]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:32:26 np0005592157 bold_merkle[357916]: --> relative data size: 1.0
Jan 22 10:32:26 np0005592157 bold_merkle[357916]: --> All data devices are unavailable
Jan 22 10:32:26 np0005592157 systemd[1]: libpod-9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41.scope: Deactivated successfully.
Jan 22 10:32:26 np0005592157 podman[357898]: 2026-01-22 15:32:26.494309392 +0000 UTC m=+1.478618798 container died 9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:32:27 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:28.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:28.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:28 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b22477951eaf43ba0c5ae56af020ac2988d6151528daf7e9dd80096fa350ddd4-merged.mount: Deactivated successfully.
Jan 22 10:32:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 183 slow ops, oldest one blocked for 6938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:29 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:29 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:29 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:29 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:29 np0005592157 ceph-mon[74359]: Health check update: 183 slow ops, oldest one blocked for 6938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:30.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:30.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:31 np0005592157 podman[357898]: 2026-01-22 15:32:31.117035476 +0000 UTC m=+6.101344892 container remove 9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:32:31 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:31 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:31 np0005592157 systemd[1]: libpod-conmon-9e2a9d9936d04645bd92abe23281aa885858dc7987589ad72a37190942dc5d41.scope: Deactivated successfully.
Jan 22 10:32:31 np0005592157 podman[358089]: 2026-01-22 15:32:31.769411392 +0000 UTC m=+0.038688258 container create d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:32:31 np0005592157 systemd[1]: Started libpod-conmon-d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b.scope.
Jan 22 10:32:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:32:31 np0005592157 podman[358089]: 2026-01-22 15:32:31.846181242 +0000 UTC m=+0.115458128 container init d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:32:31 np0005592157 podman[358089]: 2026-01-22 15:32:31.75155249 +0000 UTC m=+0.020829376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:32:31 np0005592157 podman[358089]: 2026-01-22 15:32:31.854677883 +0000 UTC m=+0.123954769 container start d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:32:31 np0005592157 brave_feynman[358106]: 167 167
Jan 22 10:32:31 np0005592157 systemd[1]: libpod-d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b.scope: Deactivated successfully.
Jan 22 10:32:31 np0005592157 podman[358089]: 2026-01-22 15:32:31.859394389 +0000 UTC m=+0.128671245 container attach d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:32:31 np0005592157 podman[358089]: 2026-01-22 15:32:31.860026015 +0000 UTC m=+0.129302901 container died d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:32:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6b79dda1c13fdcc5a2d56db047479479bfc32ac34dcc6ac23408d586259359c9-merged.mount: Deactivated successfully.
Jan 22 10:32:31 np0005592157 podman[358089]: 2026-01-22 15:32:31.895900473 +0000 UTC m=+0.165177339 container remove d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:32:31 np0005592157 systemd[1]: libpod-conmon-d5fa3ecb0b4ce503f087ffaa9e9b386878cee745d82fdadc453e536c7e9ab16b.scope: Deactivated successfully.
Jan 22 10:32:31 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:32 np0005592157 podman[358130]: 2026-01-22 15:32:32.039327013 +0000 UTC m=+0.034173277 container create 164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_albattani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:32:32 np0005592157 systemd[1]: Started libpod-conmon-164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae.scope.
Jan 22 10:32:32 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:32:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d84b3f8ac7b80f3947b5cacd92ad1dcf2f7df37555ff93bcf70dd03c076404/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d84b3f8ac7b80f3947b5cacd92ad1dcf2f7df37555ff93bcf70dd03c076404/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d84b3f8ac7b80f3947b5cacd92ad1dcf2f7df37555ff93bcf70dd03c076404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:32 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d84b3f8ac7b80f3947b5cacd92ad1dcf2f7df37555ff93bcf70dd03c076404/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:32 np0005592157 podman[358130]: 2026-01-22 15:32:32.11030873 +0000 UTC m=+0.105155014 container init 164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_albattani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 10:32:32 np0005592157 podman[358130]: 2026-01-22 15:32:32.121263551 +0000 UTC m=+0.116109815 container start 164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:32:32 np0005592157 podman[358130]: 2026-01-22 15:32:32.024839364 +0000 UTC m=+0.019685628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:32:32 np0005592157 podman[358130]: 2026-01-22 15:32:32.124916611 +0000 UTC m=+0.119762875 container attach 164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:32:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:32.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:32 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:32.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]: {
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:    "0": [
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:        {
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "devices": [
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "/dev/loop3"
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            ],
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "lv_name": "ceph_lv0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "lv_size": "7511998464",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "name": "ceph_lv0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "tags": {
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.cluster_name": "ceph",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.crush_device_class": "",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.encrypted": "0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.osd_id": "0",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.type": "block",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:                "ceph.vdo": "0"
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            },
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "type": "block",
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:            "vg_name": "ceph_vg0"
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:        }
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]:    ]
Jan 22 10:32:32 np0005592157 sweet_albattani[358146]: }
Jan 22 10:32:32 np0005592157 podman[358130]: 2026-01-22 15:32:32.869556992 +0000 UTC m=+0.864403346 container died 164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:32:32 np0005592157 systemd[1]: libpod-164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae.scope: Deactivated successfully.
Jan 22 10:32:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-b4d84b3f8ac7b80f3947b5cacd92ad1dcf2f7df37555ff93bcf70dd03c076404-merged.mount: Deactivated successfully.
Jan 22 10:32:32 np0005592157 podman[358130]: 2026-01-22 15:32:32.939044412 +0000 UTC m=+0.933890676 container remove 164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:32:32 np0005592157 systemd[1]: libpod-conmon-164d63e0bdc7f7f1edee2e9779b554fb5f9d58d97669248f50cd705f1a8a39ae.scope: Deactivated successfully.
Jan 22 10:32:33 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:33 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:33 np0005592157 podman[358360]: 2026-01-22 15:32:33.504831615 +0000 UTC m=+0.043115828 container create aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:32:33 np0005592157 systemd[1]: Started libpod-conmon-aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5.scope.
Jan 22 10:32:33 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:32:33 np0005592157 podman[358360]: 2026-01-22 15:32:33.487427224 +0000 UTC m=+0.025711457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:32:33 np0005592157 podman[358360]: 2026-01-22 15:32:33.587104771 +0000 UTC m=+0.125389014 container init aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 10:32:33 np0005592157 podman[358360]: 2026-01-22 15:32:33.592240899 +0000 UTC m=+0.130525112 container start aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 10:32:33 np0005592157 podman[358360]: 2026-01-22 15:32:33.595338475 +0000 UTC m=+0.133622688 container attach aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:32:33 np0005592157 youthful_lumiere[358377]: 167 167
Jan 22 10:32:33 np0005592157 systemd[1]: libpod-aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5.scope: Deactivated successfully.
Jan 22 10:32:33 np0005592157 podman[358360]: 2026-01-22 15:32:33.596398551 +0000 UTC m=+0.134682764 container died aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:32:33 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1538ecf050bfcf6be02e84eab036b78622d7580791e585da776f04922e825e7f-merged.mount: Deactivated successfully.
Jan 22 10:32:33 np0005592157 podman[358360]: 2026-01-22 15:32:33.633826998 +0000 UTC m=+0.172111221 container remove aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:32:33 np0005592157 systemd[1]: libpod-conmon-aa4e7fbcb967fad0e7b751f44b663f96352f70a12352558a56d5c113d13129f5.scope: Deactivated successfully.
Jan 22 10:32:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 183 slow ops, oldest one blocked for 6943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:33 np0005592157 podman[358401]: 2026-01-22 15:32:33.822711243 +0000 UTC m=+0.046704317 container create 5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:32:33 np0005592157 systemd[1]: Started libpod-conmon-5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0.scope.
Jan 22 10:32:33 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:32:33 np0005592157 podman[358401]: 2026-01-22 15:32:33.798383321 +0000 UTC m=+0.022376405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:32:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dab4125047b467b26a475b4161c31bcd4eff333571de609c1e398e9507e78a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dab4125047b467b26a475b4161c31bcd4eff333571de609c1e398e9507e78a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dab4125047b467b26a475b4161c31bcd4eff333571de609c1e398e9507e78a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:33 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dab4125047b467b26a475b4161c31bcd4eff333571de609c1e398e9507e78a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:32:33 np0005592157 podman[358401]: 2026-01-22 15:32:33.906106697 +0000 UTC m=+0.130099741 container init 5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:32:33 np0005592157 podman[358401]: 2026-01-22 15:32:33.914049864 +0000 UTC m=+0.138042898 container start 5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 10:32:33 np0005592157 podman[358401]: 2026-01-22 15:32:33.91795012 +0000 UTC m=+0.141943154 container attach 5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:32:33 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:34 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:34 np0005592157 ceph-mon[74359]: Health check update: 183 slow ops, oldest one blocked for 6943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:34.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]: {
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]:        "osd_id": 0,
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]:        "type": "bluestore"
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]:    }
Jan 22 10:32:34 np0005592157 laughing_darwin[358418]: }
Jan 22 10:32:34 np0005592157 systemd[1]: libpod-5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0.scope: Deactivated successfully.
Jan 22 10:32:34 np0005592157 podman[358401]: 2026-01-22 15:32:34.716030123 +0000 UTC m=+0.940023157 container died 5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 10:32:34 np0005592157 systemd[1]: var-lib-containers-storage-overlay-34dab4125047b467b26a475b4161c31bcd4eff333571de609c1e398e9507e78a-merged.mount: Deactivated successfully.
Jan 22 10:32:34 np0005592157 podman[358401]: 2026-01-22 15:32:34.774980292 +0000 UTC m=+0.998973326 container remove 5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 10:32:34 np0005592157 systemd[1]: libpod-conmon-5ac269fb93566b55f5977743e7472532072e84539ceda00542774c6d676fb3f0.scope: Deactivated successfully.
Jan 22 10:32:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:32:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:32:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6d082223-9be8-4f51-919b-d9176419b407 does not exist
Jan 22 10:32:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fc4012ed-a71d-4ca6-a452-ebae21324580 does not exist
Jan 22 10:32:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ef513eb2-d80a-4634-984d-1296d70e9b43 does not exist
Jan 22 10:32:35 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:35 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:35 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:36.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:36 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:37 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:37 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:38.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:38.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:38 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 183 slow ops, oldest one blocked for 6948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:39 np0005592157 podman[358504]: 2026-01-22 15:32:39.31663851 +0000 UTC m=+0.057155936 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:32:39 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:39 np0005592157 ceph-mon[74359]: Health check update: 183 slow ops, oldest one blocked for 6948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:39 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:40.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:41 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:41 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:42.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:32:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:32:42 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:42 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:43 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 183 slow ops, oldest one blocked for 6953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:43 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:44.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:44 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:44 np0005592157 ceph-mon[74359]: Health check update: 183 slow ops, oldest one blocked for 6953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:45 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:45 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:46.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:46 np0005592157 podman[358529]: 2026-01-22 15:32:46.366465386 +0000 UTC m=+0.105858701 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:32:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:32:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:32:47 np0005592157 ceph-mon[74359]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 10:32:47 np0005592157 ceph-mgr[74655]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:32:47
Jan 22 10:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['images', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.log']
Jan 22 10:32:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:32:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:32:47.672 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:32:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:32:47.673 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:32:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:32:47.673 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:32:47 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:48.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 183 slow ops, oldest one blocked for 6958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:48 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:48 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:49 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:32:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:50.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:32:50 np0005592157 ceph-mon[74359]: Health check update: 183 slow ops, oldest one blocked for 6958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:50 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:51 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:52 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:32:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:32:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 183 slow ops, oldest one blocked for 6963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:53 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:53 np0005592157 ceph-mon[74359]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:53 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:32:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:54.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:32:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:54 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592157 ceph-mon[74359]: Health check update: 183 slow ops, oldest one blocked for 6963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:55 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:56 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:56.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:56.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:57 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:57 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:32:58 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:58.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:32:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:58.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 6968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:59 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:59 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 6968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:59 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:00.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:00.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:00 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:02.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:33:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:02.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:33:03 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 6973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:03 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:04.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:04 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:04 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 6973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:04 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:33:05 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:05 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:06.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:06.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:07 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:08 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:08.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 6978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:09 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:09 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:09 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 6978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:09 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:10.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:10 np0005592157 podman[358618]: 2026-01-22 15:33:10.362056587 +0000 UTC m=+0.084377810 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:33:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:11 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:11 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:11 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:12 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:33:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:33:13 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:13 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 6983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:13 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:14 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 6983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:14 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:15 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:15 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:16.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:33:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:33:16 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:17 np0005592157 podman[358691]: 2026-01-22 15:33:17.341817118 +0000 UTC m=+0.083782775 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:33:17 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:17 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:18.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 6988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:18 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:18 np0005592157 ceph-mon[74359]: Health check update: 41 slow ops, oldest one blocked for 6988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:19 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:20 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:20.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:21 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:21 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:22.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:23 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:24.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:24 np0005592157 ceph-mon[74359]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:24 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:25 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:26 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:26 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:26.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:26 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 6998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:27 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:28 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 6998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:28 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:28.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:33:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:28.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:33:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:29 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:30 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:30 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:31 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:32.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:32 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:32 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:34 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:34 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:34.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:35 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:36 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c4c2f968-c810-4e30-a329-01d9e7d70ed7 does not exist
Jan 22 10:33:36 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0666da4b-f9ee-48bd-9b3d-54632c28285d does not exist
Jan 22 10:33:36 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 85bb81f4-5b15-43e2-96ed-840bf59825bf does not exist
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:36 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:33:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:33:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:33:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:33:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:36.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:33:36 np0005592157 podman[359055]: 2026-01-22 15:33:36.852423463 +0000 UTC m=+0.045129218 container create 0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 10:33:36 np0005592157 systemd[1]: Started libpod-conmon-0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb.scope.
Jan 22 10:33:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:33:36 np0005592157 podman[359055]: 2026-01-22 15:33:36.832230763 +0000 UTC m=+0.024936548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:33:36 np0005592157 podman[359055]: 2026-01-22 15:33:36.930606298 +0000 UTC m=+0.123312063 container init 0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:33:36 np0005592157 podman[359055]: 2026-01-22 15:33:36.937633812 +0000 UTC m=+0.130339567 container start 0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:33:36 np0005592157 podman[359055]: 2026-01-22 15:33:36.940623596 +0000 UTC m=+0.133329351 container attach 0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pasteur, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:33:36 np0005592157 tender_pasteur[359071]: 167 167
Jan 22 10:33:36 np0005592157 systemd[1]: libpod-0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb.scope: Deactivated successfully.
Jan 22 10:33:36 np0005592157 conmon[359071]: conmon 0f52cc09d8db19942912 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb.scope/container/memory.events
Jan 22 10:33:36 np0005592157 podman[359055]: 2026-01-22 15:33:36.944349618 +0000 UTC m=+0.137055393 container died 0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:33:36 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c742c91fb3f4e045ac29910b13d0edb3fc3755a4c6c19d63e3dc7f90f775618a-merged.mount: Deactivated successfully.
Jan 22 10:33:36 np0005592157 podman[359055]: 2026-01-22 15:33:36.989375182 +0000 UTC m=+0.182080937 container remove 0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:33:36 np0005592157 systemd[1]: libpod-conmon-0f52cc09d8db19942912709d3f4ba9a386ced7aba9399dc4156a49c5fa3bc3bb.scope: Deactivated successfully.
Jan 22 10:33:37 np0005592157 podman[359096]: 2026-01-22 15:33:37.138473152 +0000 UTC m=+0.042496612 container create 3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:33:37 np0005592157 systemd[1]: Started libpod-conmon-3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b.scope.
Jan 22 10:33:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:33:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062e82808a521faf123ca8af85af57e0652c811c2ccb6f342a8b4aa1aeba292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062e82808a521faf123ca8af85af57e0652c811c2ccb6f342a8b4aa1aeba292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062e82808a521faf123ca8af85af57e0652c811c2ccb6f342a8b4aa1aeba292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062e82808a521faf123ca8af85af57e0652c811c2ccb6f342a8b4aa1aeba292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:37 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062e82808a521faf123ca8af85af57e0652c811c2ccb6f342a8b4aa1aeba292/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:37 np0005592157 podman[359096]: 2026-01-22 15:33:37.119570995 +0000 UTC m=+0.023594515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:33:37 np0005592157 podman[359096]: 2026-01-22 15:33:37.215738895 +0000 UTC m=+0.119762385 container init 3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:33:37 np0005592157 podman[359096]: 2026-01-22 15:33:37.223855006 +0000 UTC m=+0.127878486 container start 3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mendeleev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 10:33:37 np0005592157 podman[359096]: 2026-01-22 15:33:37.227382493 +0000 UTC m=+0.131405993 container attach 3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 10:33:37 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:37 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:33:37 np0005592157 musing_mendeleev[359113]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:33:37 np0005592157 musing_mendeleev[359113]: --> relative data size: 1.0
Jan 22 10:33:37 np0005592157 musing_mendeleev[359113]: --> All data devices are unavailable
Jan 22 10:33:38 np0005592157 systemd[1]: libpod-3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b.scope: Deactivated successfully.
Jan 22 10:33:38 np0005592157 podman[359096]: 2026-01-22 15:33:38.032742736 +0000 UTC m=+0.936766256 container died 3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:33:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f062e82808a521faf123ca8af85af57e0652c811c2ccb6f342a8b4aa1aeba292-merged.mount: Deactivated successfully.
Jan 22 10:33:38 np0005592157 podman[359096]: 2026-01-22 15:33:38.113073135 +0000 UTC m=+1.017096605 container remove 3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 10:33:38 np0005592157 systemd[1]: libpod-conmon-3c70a8bfb57d346db8a4d8657e49f7556fadb221ec9c48beade55417f198962b.scope: Deactivated successfully.
Jan 22 10:33:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:38 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:38 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:38 np0005592157 podman[359283]: 2026-01-22 15:33:38.815430578 +0000 UTC m=+0.056091289 container create 8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haslett, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:33:38 np0005592157 systemd[1]: Started libpod-conmon-8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191.scope.
Jan 22 10:33:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:33:38 np0005592157 podman[359283]: 2026-01-22 15:33:38.79530341 +0000 UTC m=+0.035964201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:33:38 np0005592157 podman[359283]: 2026-01-22 15:33:38.904994215 +0000 UTC m=+0.145654946 container init 8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:33:38 np0005592157 podman[359283]: 2026-01-22 15:33:38.913733192 +0000 UTC m=+0.154393903 container start 8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haslett, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:33:38 np0005592157 podman[359283]: 2026-01-22 15:33:38.917148126 +0000 UTC m=+0.157808867 container attach 8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:33:38 np0005592157 recursing_haslett[359299]: 167 167
Jan 22 10:33:38 np0005592157 systemd[1]: libpod-8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191.scope: Deactivated successfully.
Jan 22 10:33:38 np0005592157 podman[359283]: 2026-01-22 15:33:38.918963751 +0000 UTC m=+0.159624452 container died 8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 22 10:33:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-808748ab96b2fa4464084fa09dc9ff31ed1e5a2f17d4d93017873a601cfc626e-merged.mount: Deactivated successfully.
Jan 22 10:33:38 np0005592157 podman[359283]: 2026-01-22 15:33:38.955230199 +0000 UTC m=+0.195890910 container remove 8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haslett, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:33:38 np0005592157 systemd[1]: libpod-conmon-8a6dde00f9fd00e3a58872afd538f74201071c1df130c5805bcb169b462c5191.scope: Deactivated successfully.
Jan 22 10:33:39 np0005592157 podman[359323]: 2026-01-22 15:33:39.102836572 +0000 UTC m=+0.038116464 container create 21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:33:39 np0005592157 systemd[1]: Started libpod-conmon-21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d.scope.
Jan 22 10:33:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:33:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c575ae41e817716fae55e139e65ecc11e611babf95acd1f78d8f2edafd89f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c575ae41e817716fae55e139e65ecc11e611babf95acd1f78d8f2edafd89f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c575ae41e817716fae55e139e65ecc11e611babf95acd1f78d8f2edafd89f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:39 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c575ae41e817716fae55e139e65ecc11e611babf95acd1f78d8f2edafd89f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:39 np0005592157 podman[359323]: 2026-01-22 15:33:39.177417778 +0000 UTC m=+0.112697670 container init 21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:33:39 np0005592157 podman[359323]: 2026-01-22 15:33:39.086337924 +0000 UTC m=+0.021617836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:33:39 np0005592157 podman[359323]: 2026-01-22 15:33:39.184058062 +0000 UTC m=+0.119337954 container start 21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:33:39 np0005592157 podman[359323]: 2026-01-22 15:33:39.194374338 +0000 UTC m=+0.129654230 container attach 21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:33:39 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:39 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]: {
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:    "0": [
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:        {
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "devices": [
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "/dev/loop3"
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            ],
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "lv_name": "ceph_lv0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "lv_size": "7511998464",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "name": "ceph_lv0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "tags": {
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.cluster_name": "ceph",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.crush_device_class": "",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.encrypted": "0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.osd_id": "0",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.type": "block",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:                "ceph.vdo": "0"
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            },
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "type": "block",
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:            "vg_name": "ceph_vg0"
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:        }
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]:    ]
Jan 22 10:33:39 np0005592157 wizardly_chaum[359339]: }
Jan 22 10:33:39 np0005592157 systemd[1]: libpod-21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d.scope: Deactivated successfully.
Jan 22 10:33:39 np0005592157 podman[359323]: 2026-01-22 15:33:39.97911389 +0000 UTC m=+0.914393782 container died 21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:33:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-51c575ae41e817716fae55e139e65ecc11e611babf95acd1f78d8f2edafd89f7-merged.mount: Deactivated successfully.
Jan 22 10:33:40 np0005592157 podman[359323]: 2026-01-22 15:33:40.032697706 +0000 UTC m=+0.967977598 container remove 21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:33:40 np0005592157 systemd[1]: libpod-conmon-21d776864518ba046bfb119557fd25473a8b7513a7e9da19abf132571069b32d.scope: Deactivated successfully.
Jan 22 10:33:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:40.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:40 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:40 np0005592157 podman[359500]: 2026-01-22 15:33:40.614071255 +0000 UTC m=+0.042943854 container create e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 10:33:40 np0005592157 systemd[1]: Started libpod-conmon-e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f.scope.
Jan 22 10:33:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:33:40 np0005592157 podman[359500]: 2026-01-22 15:33:40.665744934 +0000 UTC m=+0.094617553 container init e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:33:40 np0005592157 podman[359500]: 2026-01-22 15:33:40.671588919 +0000 UTC m=+0.100461518 container start e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:33:40 np0005592157 ecstatic_vaughan[359518]: 167 167
Jan 22 10:33:40 np0005592157 podman[359500]: 2026-01-22 15:33:40.67529236 +0000 UTC m=+0.104164979 container attach e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_vaughan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 10:33:40 np0005592157 systemd[1]: libpod-e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f.scope: Deactivated successfully.
Jan 22 10:33:40 np0005592157 podman[359500]: 2026-01-22 15:33:40.677060834 +0000 UTC m=+0.105933433 container died e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:33:40 np0005592157 podman[359500]: 2026-01-22 15:33:40.598326715 +0000 UTC m=+0.027199334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:33:40 np0005592157 systemd[1]: var-lib-containers-storage-overlay-050667be33bf588cfb393c8c01faa44f67a159bc93e5d034c2016e0efa57bf36-merged.mount: Deactivated successfully.
Jan 22 10:33:40 np0005592157 podman[359500]: 2026-01-22 15:33:40.712781938 +0000 UTC m=+0.141654537 container remove e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:33:40 np0005592157 systemd[1]: libpod-conmon-e6aaeb8d25209cd84214ae4e44fb4cd2676004b1d05935d4ab52d6c5cf224a7f.scope: Deactivated successfully.
Jan 22 10:33:40 np0005592157 podman[359514]: 2026-01-22 15:33:40.722486798 +0000 UTC m=+0.067052830 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:33:40 np0005592157 podman[359557]: 2026-01-22 15:33:40.882022157 +0000 UTC m=+0.038493664 container create 5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:33:40 np0005592157 systemd[1]: Started libpod-conmon-5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda.scope.
Jan 22 10:33:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:33:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9077def4fc69ad1a57fcc0167773cb6a93f99d5943644837708047880c3ddca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9077def4fc69ad1a57fcc0167773cb6a93f99d5943644837708047880c3ddca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9077def4fc69ad1a57fcc0167773cb6a93f99d5943644837708047880c3ddca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9077def4fc69ad1a57fcc0167773cb6a93f99d5943644837708047880c3ddca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:33:40 np0005592157 podman[359557]: 2026-01-22 15:33:40.867077267 +0000 UTC m=+0.023548794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:33:40 np0005592157 podman[359557]: 2026-01-22 15:33:40.963625777 +0000 UTC m=+0.120097284 container init 5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 10:33:40 np0005592157 podman[359557]: 2026-01-22 15:33:40.970126088 +0000 UTC m=+0.126597595 container start 5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 22 10:33:40 np0005592157 podman[359557]: 2026-01-22 15:33:40.97384914 +0000 UTC m=+0.130320647 container attach 5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:33:41 np0005592157 loving_cerf[359574]: {
Jan 22 10:33:41 np0005592157 loving_cerf[359574]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:33:41 np0005592157 loving_cerf[359574]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:33:41 np0005592157 loving_cerf[359574]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:33:41 np0005592157 loving_cerf[359574]:        "osd_id": 0,
Jan 22 10:33:41 np0005592157 loving_cerf[359574]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:33:41 np0005592157 loving_cerf[359574]:        "type": "bluestore"
Jan 22 10:33:41 np0005592157 loving_cerf[359574]:    }
Jan 22 10:33:41 np0005592157 loving_cerf[359574]: }
Jan 22 10:33:41 np0005592157 systemd[1]: libpod-5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda.scope: Deactivated successfully.
Jan 22 10:33:41 np0005592157 podman[359557]: 2026-01-22 15:33:41.850454207 +0000 UTC m=+1.006925714 container died 5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:33:41 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a9077def4fc69ad1a57fcc0167773cb6a93f99d5943644837708047880c3ddca-merged.mount: Deactivated successfully.
Jan 22 10:33:42 np0005592157 podman[359557]: 2026-01-22 15:33:42.012341133 +0000 UTC m=+1.168812680 container remove 5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:33:42 np0005592157 systemd[1]: libpod-conmon-5545bb916c1c74febd7e5d392e4dcdf00c376511d213e68a9fc891cefde9abda.scope: Deactivated successfully.
Jan 22 10:33:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:33:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:33:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:42 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fb8a0880-511a-4958-a3d0-417121f5d8cc does not exist
Jan 22 10:33:42 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f272aa34-c1ba-4e63-9847-63be9b29280e does not exist
Jan 22 10:33:42 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fd348000-a477-4b54-8fe4-0ca3e9e42d93 does not exist
Jan 22 10:33:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:43 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:44 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:44 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:44 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:45 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:33:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:33:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:46.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:33:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:33:46 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:33:47
Jan 22 10:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.control']
Jan 22 10:33:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:33:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:33:47.675 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:33:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:33:47.680 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:33:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:33:47.681 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:33:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:48 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:48 np0005592157 podman[359663]: 2026-01-22 15:33:48.417973215 +0000 UTC m=+0.144848336 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 10:33:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:48.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:49 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:49 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:50.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:50 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:51 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:51 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:33:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:52.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:33:52 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #240. Immutable memtables: 0.
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.002031) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 151] Flushing memtable with next log file: 240
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034002649, "job": 151, "event": "flush_started", "num_memtables": 1, "num_entries": 2152, "num_deletes": 736, "total_data_size": 2496389, "memory_usage": 2547536, "flush_reason": "Manual Compaction"}
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 151] Level-0 flush table #241: started
Jan 22 10:33:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034162588, "cf_name": "default", "job": 151, "event": "table_file_creation", "file_number": 241, "file_size": 2442903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 107049, "largest_seqno": 109200, "table_properties": {"data_size": 2434191, "index_size": 4181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 29850, "raw_average_key_size": 21, "raw_value_size": 2411596, "raw_average_value_size": 1777, "num_data_blocks": 178, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 736, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095898, "oldest_key_time": 1769095898, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 241, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 151] Flush lasted 160631 microseconds, and 5371 cpu microseconds.
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:33:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.162662) [db/flush_job.cc:967] [default] [JOB 151] Level-0 flush table #241: 2442903 bytes OK
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.162687) [db/memtable_list.cc:519] [default] Level-0 commit table #241 started
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.311609) [db/memtable_list.cc:722] [default] Level-0 commit table #241: memtable #1 done
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.311658) EVENT_LOG_v1 {"time_micros": 1769096034311647, "job": 151, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.311682) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 151] Try to delete WAL files size 2485161, prev total WAL file size 2486449, number of live WAL files 2.
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000237.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.376984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035353338' seq:72057594037927935, type:22 .. '6C6F676D0035373931' seq:0, type:0; will stop at (end)
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 152] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 151 Base level 0, inputs: [241(2385KB)], [239(10MB)]
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034377072, "job": 152, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [241], "files_L6": [239], "score": -1, "input_data_size": 13839191, "oldest_snapshot_seqno": -1}
Jan 22 10:33:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 152] Generated table #242: 14349 keys, 11969627 bytes, temperature: kUnknown
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034636075, "cf_name": "default", "job": 152, "event": "table_file_creation", "file_number": 242, "file_size": 11969627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11891752, "index_size": 40898, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35909, "raw_key_size": 395120, "raw_average_key_size": 27, "raw_value_size": 11648373, "raw_average_value_size": 811, "num_data_blocks": 1472, "num_entries": 14349, "num_filter_entries": 14349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 242, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.636870) [db/compaction/compaction_job.cc:1663] [default] [JOB 152] Compacted 1@0 + 1@6 files to L6 => 11969627 bytes
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.721263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.4 rd, 46.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 15838, records dropped: 1489 output_compression: NoCompression
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.721312) EVENT_LOG_v1 {"time_micros": 1769096034721293, "job": 152, "event": "compaction_finished", "compaction_time_micros": 259088, "compaction_time_cpu_micros": 58987, "output_level": 6, "num_output_files": 1, "total_output_size": 11969627, "num_input_records": 15838, "num_output_records": 14349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000241.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034722470, "job": 152, "event": "table_file_deletion", "file_number": 241}
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000239.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034726456, "job": 152, "event": "table_file_deletion", "file_number": 239}
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.376826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.726503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.726509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.726512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.726515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:33:54.726519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:55 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:55 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:57 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:33:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:33:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:33:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:33:58 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:59 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:59 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:59 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:01 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:02 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:02 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:03 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:03 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:04.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:05 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:05 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:34:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:34:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:06.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:06.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:07 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:07 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:08.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:08 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:08 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:08 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:34:08.838 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:34:08 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:34:08.839 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:34:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:10 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:10 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:10.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:11 np0005592157 podman[359751]: 2026-01-22 15:34:11.307221773 +0000 UTC m=+0.045675151 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Jan 22 10:34:12 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:12.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:12.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:13 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:13 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:13 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 10:34:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:34:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:14.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:34:14 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:14 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:14 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:34:14.841 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:34:15 np0005592157 ceph-mon[74359]: 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:15 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 10:34:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:16.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:16.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:34:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:34:17 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:18 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 10:34:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:18.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 184 slow ops, oldest one blocked for 7048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 22 10:34:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 22 10:34:18 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 22 10:34:19 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:19 np0005592157 ceph-mon[74359]: Health check update: 184 slow ops, oldest one blocked for 7048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:19 np0005592157 podman[359822]: 2026-01-22 15:34:19.361049478 +0000 UTC m=+0.091710301 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:34:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 8 op/s
Jan 22 10:34:20 np0005592157 ceph-mon[74359]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 10:34:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:34:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:20.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:34:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:20.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:21 np0005592157 ceph-mon[74359]: 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 10:34:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 913 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 30 op/s
Jan 22 10:34:22 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:23 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:23 np0005592157 ceph-mon[74359]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:23 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 55 slow ops, oldest one blocked for 7053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:23 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 45 op/s
Jan 22 10:34:24 np0005592157 ceph-mon[74359]: Health check update: 55 slow ops, oldest one blocked for 7053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:24 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:24.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:25 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 22 10:34:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:34:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:34:26 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:27 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 22 10:34:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:28.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:28 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:28 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 7058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:28 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:29 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 7058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:29 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 22 10:34:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:34:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 18K writes, 55K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 18K writes, 6227 syncs, 2.95 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 647 writes, 1071 keys, 647 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s#012Interval WAL: 647 writes, 318 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:34:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 10:34:32 np0005592157 ceph-mon[74359]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 10:34:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:32.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:33 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:33 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 7063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 913 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 820 KiB/s wr, 27 op/s
Jan 22 10:34:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:34.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:34 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592157 ceph-mon[74359]: Health check update: 21 slow ops, oldest one blocked for 7063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:35 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:35 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 682 B/s wr, 18 op/s
Jan 22 10:34:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:36.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:37 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 10:34:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:38.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:38.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:38 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 7068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:38 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:38 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:40 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 7068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:40 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 10:34:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:40.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:41 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 10:34:42 np0005592157 podman[359913]: 2026-01-22 15:34:42.310080911 +0000 UTC m=+0.051418735 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 10:34:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:42.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:42 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:43 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 10:34:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:34:43 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 7073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 597 B/s wr, 8 op/s
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:34:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:44.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 7073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:44 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev cba0f6ac-41f5-493a-a5ae-532d2fe411ea does not exist
Jan 22 10:34:44 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3b16006d-2d32-4c4c-9729-4e143f4ec147 does not exist
Jan 22 10:34:44 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 2f4d1e07-eeb2-41e1-9c33-03950e44774d does not exist
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:34:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:34:45 np0005592157 podman[360320]: 2026-01-22 15:34:45.389201501 +0000 UTC m=+0.042813222 container create 14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 10:34:45 np0005592157 systemd[1]: Started libpod-conmon-14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897.scope.
Jan 22 10:34:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:34:45 np0005592157 podman[360320]: 2026-01-22 15:34:45.366352215 +0000 UTC m=+0.019963936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:34:45 np0005592157 podman[360320]: 2026-01-22 15:34:45.466411245 +0000 UTC m=+0.120022956 container init 14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 10:34:45 np0005592157 podman[360320]: 2026-01-22 15:34:45.473520751 +0000 UTC m=+0.127132462 container start 14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 10:34:45 np0005592157 gracious_solomon[360336]: 167 167
Jan 22 10:34:45 np0005592157 systemd[1]: libpod-14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897.scope: Deactivated successfully.
Jan 22 10:34:45 np0005592157 podman[360320]: 2026-01-22 15:34:45.479550041 +0000 UTC m=+0.133161772 container attach 14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 10:34:45 np0005592157 podman[360320]: 2026-01-22 15:34:45.479873379 +0000 UTC m=+0.133485080 container died 14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:34:45 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1bb3bf18cdfdf90634f4833d32387501f6545fb70a3eaac75ba8b0a85215b593-merged.mount: Deactivated successfully.
Jan 22 10:34:45 np0005592157 podman[360320]: 2026-01-22 15:34:45.528157655 +0000 UTC m=+0.181769396 container remove 14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:34:45 np0005592157 systemd[1]: libpod-conmon-14c35baff3046cd17bfd3657b18594c04fc3b19ff4be951cb866eedb451e7897.scope: Deactivated successfully.
Jan 22 10:34:45 np0005592157 podman[360361]: 2026-01-22 15:34:45.72888951 +0000 UTC m=+0.045349425 container create c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:34:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:34:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:34:45 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:45 np0005592157 systemd[1]: Started libpod-conmon-c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f.scope.
Jan 22 10:34:45 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:34:45 np0005592157 podman[360361]: 2026-01-22 15:34:45.710664628 +0000 UTC m=+0.027124553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:34:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97c7cd48c30ec4335bd03d2a8be19ef3348c5dc4c3906c978232ed6ddc95cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97c7cd48c30ec4335bd03d2a8be19ef3348c5dc4c3906c978232ed6ddc95cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97c7cd48c30ec4335bd03d2a8be19ef3348c5dc4c3906c978232ed6ddc95cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97c7cd48c30ec4335bd03d2a8be19ef3348c5dc4c3906c978232ed6ddc95cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:45 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97c7cd48c30ec4335bd03d2a8be19ef3348c5dc4c3906c978232ed6ddc95cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:45 np0005592157 podman[360361]: 2026-01-22 15:34:45.826483629 +0000 UTC m=+0.142943594 container init c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:34:45 np0005592157 podman[360361]: 2026-01-22 15:34:45.840377323 +0000 UTC m=+0.156837228 container start c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:34:45 np0005592157 podman[360361]: 2026-01-22 15:34:45.844467584 +0000 UTC m=+0.160927579 container attach c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:34:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Jan 22 10:34:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:46.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:46 np0005592157 recursing_wing[360377]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:34:46 np0005592157 recursing_wing[360377]: --> relative data size: 1.0
Jan 22 10:34:46 np0005592157 recursing_wing[360377]: --> All data devices are unavailable
Jan 22 10:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:34:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:34:46 np0005592157 systemd[1]: libpod-c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f.scope: Deactivated successfully.
Jan 22 10:34:46 np0005592157 podman[360361]: 2026-01-22 15:34:46.632487344 +0000 UTC m=+0.948947249 container died c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:34:46 np0005592157 systemd[1]: var-lib-containers-storage-overlay-cb97c7cd48c30ec4335bd03d2a8be19ef3348c5dc4c3906c978232ed6ddc95cf-merged.mount: Deactivated successfully.
Jan 22 10:34:46 np0005592157 podman[360361]: 2026-01-22 15:34:46.701367991 +0000 UTC m=+1.017827936 container remove c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:34:46 np0005592157 systemd[1]: libpod-conmon-c06b1354b3d7a664161657b1241dd43e01c825c009c82305bdd07de16fb5432f.scope: Deactivated successfully.
Jan 22 10:34:46 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:47 np0005592157 podman[360548]: 2026-01-22 15:34:47.441892223 +0000 UTC m=+0.051465426 container create 755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chandrasekhar, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:34:47 np0005592157 systemd[1]: Started libpod-conmon-755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f.scope.
Jan 22 10:34:47 np0005592157 podman[360548]: 2026-01-22 15:34:47.418815311 +0000 UTC m=+0.028388594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:34:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:34:47 np0005592157 podman[360548]: 2026-01-22 15:34:47.554082184 +0000 UTC m=+0.163655477 container init 755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:34:47 np0005592157 podman[360548]: 2026-01-22 15:34:47.560235986 +0000 UTC m=+0.169809229 container start 755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chandrasekhar, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:34:47 np0005592157 podman[360548]: 2026-01-22 15:34:47.564865291 +0000 UTC m=+0.174438534 container attach 755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:34:47 np0005592157 friendly_chandrasekhar[360563]: 167 167
Jan 22 10:34:47 np0005592157 systemd[1]: libpod-755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f.scope: Deactivated successfully.
Jan 22 10:34:47 np0005592157 podman[360548]: 2026-01-22 15:34:47.567755923 +0000 UTC m=+0.177329166 container died 755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chandrasekhar, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:34:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4cfb95b1a8e297b1646b0abee56215c953f3fabb16602230d7b4c3efc4904f64-merged.mount: Deactivated successfully.
Jan 22 10:34:47 np0005592157 podman[360548]: 2026-01-22 15:34:47.623879423 +0000 UTC m=+0.233452636 container remove 755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chandrasekhar, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:34:47 np0005592157 systemd[1]: libpod-conmon-755df2ad362783e1a4473709b59c04ceabb2a20391c0906b5d2410522bc1e65f.scope: Deactivated successfully.
Jan 22 10:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:34:47
Jan 22 10:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes']
Jan 22 10:34:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:34:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:34:47.674 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:34:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:34:47.676 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:34:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:34:47.676 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #243. Immutable memtables: 0.
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.812389) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 153] Flushing memtable with next log file: 243
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087812505, "job": 153, "event": "flush_started", "num_memtables": 1, "num_entries": 957, "num_deletes": 346, "total_data_size": 1093519, "memory_usage": 1116048, "flush_reason": "Manual Compaction"}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 153] Level-0 flush table #244: started
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087826173, "cf_name": "default", "job": 153, "event": "table_file_creation", "file_number": 244, "file_size": 1076604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 109201, "largest_seqno": 110157, "table_properties": {"data_size": 1071940, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 14126, "raw_average_key_size": 22, "raw_value_size": 1061306, "raw_average_value_size": 1692, "num_data_blocks": 84, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 346, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096034, "oldest_key_time": 1769096034, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 244, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 153] Flush lasted 14127 microseconds, and 7706 cpu microseconds.
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.826507) [db/flush_job.cc:967] [default] [JOB 153] Level-0 flush table #244: 1076604 bytes OK
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.826551) [db/memtable_list.cc:519] [default] Level-0 commit table #244 started
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.828782) [db/memtable_list.cc:722] [default] Level-0 commit table #244: memtable #1 done
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.828822) EVENT_LOG_v1 {"time_micros": 1769096087828811, "job": 153, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.828851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 153] Try to delete WAL files size 1088394, prev total WAL file size 1088394, number of live WAL files 2.
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000240.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.829691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 154] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 153 Base level 0, inputs: [244(1051KB)], [242(11MB)]
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087829754, "job": 154, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [244], "files_L6": [242], "score": -1, "input_data_size": 13046231, "oldest_snapshot_seqno": -1}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 154] Generated table #245: 14265 keys, 11330687 bytes, temperature: kUnknown
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087934212, "cf_name": "default", "job": 154, "event": "table_file_creation", "file_number": 245, "file_size": 11330687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253662, "index_size": 40240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35717, "raw_key_size": 393556, "raw_average_key_size": 27, "raw_value_size": 11011909, "raw_average_value_size": 771, "num_data_blocks": 1445, "num_entries": 14265, "num_filter_entries": 14265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 245, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:34:47 np0005592157 podman[360589]: 2026-01-22 15:34:47.935083776 +0000 UTC m=+0.108553781 container create c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.934700) [db/compaction/compaction_job.cc:1663] [default] [JOB 154] Compacted 1@0 + 1@6 files to L6 => 11330687 bytes
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.936058) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.6 rd, 108.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(22.6) write-amplify(10.5) OK, records in: 14976, records dropped: 711 output_compression: NoCompression
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.936080) EVENT_LOG_v1 {"time_micros": 1769096087936070, "job": 154, "event": "compaction_finished", "compaction_time_micros": 104701, "compaction_time_cpu_micros": 56310, "output_level": 6, "num_output_files": 1, "total_output_size": 11330687, "num_input_records": 14976, "num_output_records": 14265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000244.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087936414, "job": 154, "event": "table_file_deletion", "file_number": 244}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000242.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087938809, "job": 154, "event": "table_file_deletion", "file_number": 242}
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.829633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.938885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.938893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.938895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.938897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:34:47.938899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592157 podman[360589]: 2026-01-22 15:34:47.868195179 +0000 UTC m=+0.041665224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:34:47 np0005592157 systemd[1]: Started libpod-conmon-c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360.scope.
Jan 22 10:34:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:34:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a090ee6320975be79322584eb7c1ca7c8762abed786a2de4255357239adb3147/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a090ee6320975be79322584eb7c1ca7c8762abed786a2de4255357239adb3147/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a090ee6320975be79322584eb7c1ca7c8762abed786a2de4255357239adb3147/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a090ee6320975be79322584eb7c1ca7c8762abed786a2de4255357239adb3147/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:48 np0005592157 podman[360589]: 2026-01-22 15:34:48.021125569 +0000 UTC m=+0.194595604 container init c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:34:48 np0005592157 podman[360589]: 2026-01-22 15:34:48.02842907 +0000 UTC m=+0.201899075 container start c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:34:48 np0005592157 podman[360589]: 2026-01-22 15:34:48.03205714 +0000 UTC m=+0.205527175 container attach c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:34:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:48.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]: {
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:    "0": [
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:        {
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "devices": [
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "/dev/loop3"
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            ],
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "lv_name": "ceph_lv0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "lv_size": "7511998464",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "name": "ceph_lv0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "tags": {
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.cluster_name": "ceph",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.crush_device_class": "",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.encrypted": "0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.osd_id": "0",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.type": "block",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:                "ceph.vdo": "0"
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            },
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "type": "block",
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:            "vg_name": "ceph_vg0"
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:        }
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]:    ]
Jan 22 10:34:48 np0005592157 wonderful_ramanujan[360605]: }
Jan 22 10:34:48 np0005592157 systemd[1]: libpod-c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360.scope: Deactivated successfully.
Jan 22 10:34:48 np0005592157 podman[360589]: 2026-01-22 15:34:48.812254465 +0000 UTC m=+0.985724460 container died c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:34:48 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 7078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:48 np0005592157 systemd[1]: var-lib-containers-storage-overlay-a090ee6320975be79322584eb7c1ca7c8762abed786a2de4255357239adb3147-merged.mount: Deactivated successfully.
Jan 22 10:34:48 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:48 np0005592157 podman[360589]: 2026-01-22 15:34:48.8745935 +0000 UTC m=+1.048063495 container remove c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:34:48 np0005592157 systemd[1]: libpod-conmon-c9161f157c1895abbcb6529d852bf2eb656e2a3da970e7d280cd4c6f960a7360.scope: Deactivated successfully.
Jan 22 10:34:49 np0005592157 podman[360768]: 2026-01-22 15:34:49.467446513 +0000 UTC m=+0.035554222 container create e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:34:49 np0005592157 systemd[1]: Started libpod-conmon-e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18.scope.
Jan 22 10:34:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:34:49 np0005592157 podman[360768]: 2026-01-22 15:34:49.525564294 +0000 UTC m=+0.093672013 container init e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 10:34:49 np0005592157 podman[360768]: 2026-01-22 15:34:49.533514041 +0000 UTC m=+0.101621750 container start e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:34:49 np0005592157 elegant_volhard[360785]: 167 167
Jan 22 10:34:49 np0005592157 podman[360768]: 2026-01-22 15:34:49.538282699 +0000 UTC m=+0.106390438 container attach e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:34:49 np0005592157 systemd[1]: libpod-e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18.scope: Deactivated successfully.
Jan 22 10:34:49 np0005592157 podman[360768]: 2026-01-22 15:34:49.540214937 +0000 UTC m=+0.108322646 container died e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:34:49 np0005592157 podman[360768]: 2026-01-22 15:34:49.452545384 +0000 UTC m=+0.020653113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:34:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8160d2b0a87012db18aa93d38505df8e31ddc49cf855316402e6e4f7ff088541-merged.mount: Deactivated successfully.
Jan 22 10:34:49 np0005592157 podman[360768]: 2026-01-22 15:34:49.585611062 +0000 UTC m=+0.153718781 container remove e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:34:49 np0005592157 systemd[1]: libpod-conmon-e5f6ec556e206e0c9ce53fb01263d8be57b313f16109a21819c08f65c11bdf18.scope: Deactivated successfully.
Jan 22 10:34:49 np0005592157 podman[360782]: 2026-01-22 15:34:49.607350951 +0000 UTC m=+0.098156594 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:34:49 np0005592157 podman[360832]: 2026-01-22 15:34:49.747659888 +0000 UTC m=+0.048461632 container create 582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:34:49 np0005592157 systemd[1]: Started libpod-conmon-582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1.scope.
Jan 22 10:34:49 np0005592157 podman[360832]: 2026-01-22 15:34:49.727224962 +0000 UTC m=+0.028026706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:34:49 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:34:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e196dce703ebfeaa00a05f2fd0b3b9dd7469e43b1c5856f63fd2bc29f709180/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e196dce703ebfeaa00a05f2fd0b3b9dd7469e43b1c5856f63fd2bc29f709180/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e196dce703ebfeaa00a05f2fd0b3b9dd7469e43b1c5856f63fd2bc29f709180/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:49 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e196dce703ebfeaa00a05f2fd0b3b9dd7469e43b1c5856f63fd2bc29f709180/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:34:49 np0005592157 podman[360832]: 2026-01-22 15:34:49.850713382 +0000 UTC m=+0.151515196 container init 582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:34:49 np0005592157 podman[360832]: 2026-01-22 15:34:49.86069787 +0000 UTC m=+0.161499634 container start 582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:34:49 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 7078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:49 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:49 np0005592157 podman[360832]: 2026-01-22 15:34:49.867114139 +0000 UTC m=+0.167915893 container attach 582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:34:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]: {
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]:        "osd_id": 0,
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]:        "type": "bluestore"
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]:    }
Jan 22 10:34:50 np0005592157 funny_engelbart[360848]: }
Jan 22 10:34:50 np0005592157 systemd[1]: libpod-582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1.scope: Deactivated successfully.
Jan 22 10:34:50 np0005592157 podman[360832]: 2026-01-22 15:34:50.79561499 +0000 UTC m=+1.096416734 container died 582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 10:34:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1e196dce703ebfeaa00a05f2fd0b3b9dd7469e43b1c5856f63fd2bc29f709180-merged.mount: Deactivated successfully.
Jan 22 10:34:50 np0005592157 podman[360832]: 2026-01-22 15:34:50.859305968 +0000 UTC m=+1.160107702 container remove 582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:34:50 np0005592157 systemd[1]: libpod-conmon-582e153367e0d3bab8d498e82876e4593b1663d1932306c71f465cdc4791b3b1.scope: Deactivated successfully.
Jan 22 10:34:50 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:34:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:34:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6fbb2bae-ccb2-4669-a7fd-3d530340c225 does not exist
Jan 22 10:34:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a6a18a66-3193-45aa-89ba-03c7dfc7d356 does not exist
Jan 22 10:34:50 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev b4b91bf9-52bd-475c-9cca-8ffb558d7fe7 does not exist
Jan 22 10:34:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:51 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:51 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:53 np0005592157 ceph-mon[74359]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:53 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 7083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:54 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:54 np0005592157 ceph-mon[74359]: Health check update: 42 slow ops, oldest one blocked for 7083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:54.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:55 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:55 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:56.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:56 np0005592157 ceph-mon[74359]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:57 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:34:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:34:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:58.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:34:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:34:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:58.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:58 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:58 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 91 slow ops, oldest one blocked for 7088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:58 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:59 np0005592157 ceph-mon[74359]: Health check update: 91 slow ops, oldest one blocked for 7088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:59 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:00.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:00.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:00 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:01 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:35:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:02.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:35:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:02 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 7093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:04 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:04.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:05 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:05 np0005592157 ceph-mon[74359]: Health check update: 37 slow ops, oldest one blocked for 7093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:35:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:35:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:06 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:06.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:07 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:08.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:08 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 7098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:09 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:09 np0005592157 ceph-mon[74359]: Health check update: 37 slow ops, oldest one blocked for 7098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:10.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:10 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:11 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:13 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:13 np0005592157 podman[360995]: 2026-01-22 15:35:13.354327175 +0000 UTC m=+0.076697862 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:35:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 7103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:14 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:14 np0005592157 ceph-mon[74359]: Health check update: 37 slow ops, oldest one blocked for 7103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:14.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:15 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:16.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:35:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:35:17 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:17 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:18 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:35:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:35:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:35:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:18.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:35:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 7108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:19 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:19 np0005592157 ceph-mon[74359]: Health check update: 37 slow ops, oldest one blocked for 7108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:20 np0005592157 podman[361069]: 2026-01-22 15:35:20.389639221 +0000 UTC m=+0.112924980 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:35:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:20.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:21 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:22 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:22.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:22.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:23 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 7113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:24 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:24 np0005592157 ceph-mon[74359]: Health check update: 37 slow ops, oldest one blocked for 7113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:24.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:25 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:25 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:26.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:26 np0005592157 ceph-mon[74359]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:26.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:27 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:28.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:28.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:28 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 7118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:29 np0005592157 ceph-mon[74359]: Health check update: 37 slow ops, oldest one blocked for 7118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:29 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:30.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:30.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:30 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:32 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:32.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 187 slow ops, oldest one blocked for 7123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:34.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:34 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:34.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:35 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:35 np0005592157 ceph-mon[74359]: Health check update: 187 slow ops, oldest one blocked for 7123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:35 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:35 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:36.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:36 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:37 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:38.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:38.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:38 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 187 slow ops, oldest one blocked for 7128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:39 np0005592157 ceph-mon[74359]: Health check update: 187 slow ops, oldest one blocked for 7128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:39 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:40 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:42 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:42.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:43 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:43 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 187 slow ops, oldest one blocked for 7133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:44 np0005592157 podman[361157]: 2026-01-22 15:35:44.315165264 +0000 UTC m=+0.053197101 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:35:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:44.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:44.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:44 np0005592157 ceph-mon[74359]: Health check update: 187 slow ops, oldest one blocked for 7133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:44 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:45 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:35:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:46.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:35:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:35:46 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:35:47
Jan 22 10:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'volumes']
Jan 22 10:35:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:35:47.675 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:35:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:35:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:35:47.677 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:35:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:35:47.677 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:35:47 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:48.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:48 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 187 slow ops, oldest one blocked for 7138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:50 np0005592157 ceph-mon[74359]: Health check update: 187 slow ops, oldest one blocked for 7138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:50 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:51 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:51 np0005592157 podman[361179]: 2026-01-22 15:35:51.341721034 +0000 UTC m=+0.082150807 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:35:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:52 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:52.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:35:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:35:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:52 np0005592157 podman[361477]: 2026-01-22 15:35:52.761300845 +0000 UTC m=+0.051900927 container create 70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feistel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:35:52 np0005592157 systemd[1]: Started libpod-conmon-70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b.scope.
Jan 22 10:35:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:52 np0005592157 podman[361477]: 2026-01-22 15:35:52.744212272 +0000 UTC m=+0.034812374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:52 np0005592157 podman[361477]: 2026-01-22 15:35:52.848073336 +0000 UTC m=+0.138673418 container init 70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feistel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 10:35:52 np0005592157 podman[361477]: 2026-01-22 15:35:52.8563086 +0000 UTC m=+0.146908682 container start 70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feistel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:35:52 np0005592157 confident_feistel[361493]: 167 167
Jan 22 10:35:52 np0005592157 systemd[1]: libpod-70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b.scope: Deactivated successfully.
Jan 22 10:35:52 np0005592157 podman[361477]: 2026-01-22 15:35:52.863775645 +0000 UTC m=+0.154375757 container attach 70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:35:52 np0005592157 podman[361477]: 2026-01-22 15:35:52.864180555 +0000 UTC m=+0.154780637 container died 70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:35:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d0592d56f7e216cc22e2eb02bde2e1d5770569c1d649ad253917739ad407b0d4-merged.mount: Deactivated successfully.
Jan 22 10:35:52 np0005592157 podman[361477]: 2026-01-22 15:35:52.911017246 +0000 UTC m=+0.201617348 container remove 70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:35:52 np0005592157 systemd[1]: libpod-conmon-70058b5dbfcd20f423614c7b644a01c2722185de636125cb249d003d133c8a8b.scope: Deactivated successfully.
Jan 22 10:35:53 np0005592157 podman[361517]: 2026-01-22 15:35:53.076900517 +0000 UTC m=+0.042669469 container create ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:35:53 np0005592157 systemd[1]: Started libpod-conmon-ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d.scope.
Jan 22 10:35:53 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cedbe112341e95f98eac0b94e5c1849ef8033e1ab8c8f847cb90082c9b4345/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cedbe112341e95f98eac0b94e5c1849ef8033e1ab8c8f847cb90082c9b4345/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cedbe112341e95f98eac0b94e5c1849ef8033e1ab8c8f847cb90082c9b4345/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cedbe112341e95f98eac0b94e5c1849ef8033e1ab8c8f847cb90082c9b4345/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592157 podman[361517]: 2026-01-22 15:35:53.059056985 +0000 UTC m=+0.024825927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:53 np0005592157 podman[361517]: 2026-01-22 15:35:53.15772051 +0000 UTC m=+0.123489772 container init ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:35:53 np0005592157 podman[361517]: 2026-01-22 15:35:53.16619679 +0000 UTC m=+0.131965722 container start ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:35:53 np0005592157 podman[361517]: 2026-01-22 15:35:53.170165038 +0000 UTC m=+0.135933990 container attach ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:35:53 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:53 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 187 slow ops, oldest one blocked for 7143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: Health check update: 187 slow ops, oldest one blocked for 7143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:54 np0005592157 nice_spence[361533]: [
Jan 22 10:35:54 np0005592157 nice_spence[361533]:    {
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "available": false,
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "ceph_device": false,
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "lsm_data": {},
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "lvs": [],
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "path": "/dev/sr0",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "rejected_reasons": [
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "Has a FileSystem",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "Insufficient space (<5GB)"
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        ],
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        "sys_api": {
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "actuators": null,
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "device_nodes": "sr0",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "devname": "sr0",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "human_readable_size": "482.00 KB",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "id_bus": "ata",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "model": "QEMU DVD-ROM",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "nr_requests": "2",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "parent": "/dev/sr0",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "partitions": {},
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "path": "/dev/sr0",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "removable": "1",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "rev": "2.5+",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "ro": "0",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "rotational": "1",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "sas_address": "",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "sas_device_handle": "",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "scheduler_mode": "mq-deadline",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "sectors": 0,
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "sectorsize": "2048",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "size": 493568.0,
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "support_discard": "2048",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "type": "disk",
Jan 22 10:35:54 np0005592157 nice_spence[361533]:            "vendor": "QEMU"
Jan 22 10:35:54 np0005592157 nice_spence[361533]:        }
Jan 22 10:35:54 np0005592157 nice_spence[361533]:    }
Jan 22 10:35:54 np0005592157 nice_spence[361533]: ]
Jan 22 10:35:54 np0005592157 systemd[1]: libpod-ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d.scope: Deactivated successfully.
Jan 22 10:35:54 np0005592157 systemd[1]: libpod-ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d.scope: Consumed 1.244s CPU time.
Jan 22 10:35:54 np0005592157 podman[362866]: 2026-01-22 15:35:54.445999918 +0000 UTC m=+0.024133829 container died ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:35:54 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c3cedbe112341e95f98eac0b94e5c1849ef8033e1ab8c8f847cb90082c9b4345-merged.mount: Deactivated successfully.
Jan 22 10:35:54 np0005592157 podman[362866]: 2026-01-22 15:35:54.493332831 +0000 UTC m=+0.071466712 container remove ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:35:54 np0005592157 systemd[1]: libpod-conmon-ba487d61eaee197db14627aca7f88e08da45b81abd0ad6c00d079528f1cc943d.scope: Deactivated successfully.
Jan 22 10:35:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:54.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:54.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f5bad7b6-cd33-424a-8313-80b5fd764e0f does not exist
Jan 22 10:35:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev be5ad408-d3b5-4ebe-9c9a-42aff6e1bf4d does not exist
Jan 22 10:35:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f13b4145-dc19-4338-aece-a1904ebbfb32 does not exist
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:35:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:35:55 np0005592157 podman[363021]: 2026-01-22 15:35:55.506612772 +0000 UTC m=+0.047132049 container create 36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:35:55 np0005592157 systemd[1]: Started libpod-conmon-36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1.scope.
Jan 22 10:35:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:55 np0005592157 podman[363021]: 2026-01-22 15:35:55.581277673 +0000 UTC m=+0.121796980 container init 36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 10:35:55 np0005592157 podman[363021]: 2026-01-22 15:35:55.490731849 +0000 UTC m=+0.031251136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:55 np0005592157 podman[363021]: 2026-01-22 15:35:55.591914117 +0000 UTC m=+0.132433384 container start 36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:35:55 np0005592157 podman[363021]: 2026-01-22 15:35:55.59648555 +0000 UTC m=+0.137004857 container attach 36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:35:55 np0005592157 magical_hellman[363037]: 167 167
Jan 22 10:35:55 np0005592157 systemd[1]: libpod-36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1.scope: Deactivated successfully.
Jan 22 10:35:55 np0005592157 podman[363021]: 2026-01-22 15:35:55.600159771 +0000 UTC m=+0.140679048 container died 36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:35:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-54f36539380eed129644c095e9e64a409369332a36e94294d07a413716c99c9a-merged.mount: Deactivated successfully.
Jan 22 10:35:55 np0005592157 podman[363021]: 2026-01-22 15:35:55.645686689 +0000 UTC m=+0.186205976 container remove 36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 22 10:35:55 np0005592157 systemd[1]: libpod-conmon-36e629b4c99fb2c3c4eb715b1fea835d3b2be8a07d090922ecac7b6828f772d1.scope: Deactivated successfully.
Jan 22 10:35:55 np0005592157 podman[363063]: 2026-01-22 15:35:55.807820347 +0000 UTC m=+0.048122043 container create 36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:35:55 np0005592157 systemd[1]: Started libpod-conmon-36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8.scope.
Jan 22 10:35:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25e7d1c23b36daa6be83b7d5e4e70e48853e2eac078556429998f6de33ee4ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25e7d1c23b36daa6be83b7d5e4e70e48853e2eac078556429998f6de33ee4ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:55 np0005592157 podman[363063]: 2026-01-22 15:35:55.784436288 +0000 UTC m=+0.024738024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25e7d1c23b36daa6be83b7d5e4e70e48853e2eac078556429998f6de33ee4ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25e7d1c23b36daa6be83b7d5e4e70e48853e2eac078556429998f6de33ee4ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:55 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25e7d1c23b36daa6be83b7d5e4e70e48853e2eac078556429998f6de33ee4ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:55 np0005592157 podman[363063]: 2026-01-22 15:35:55.891138442 +0000 UTC m=+0.131440148 container init 36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 10:35:55 np0005592157 podman[363063]: 2026-01-22 15:35:55.899362786 +0000 UTC m=+0.139664472 container start 36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:35:55 np0005592157 podman[363063]: 2026-01-22 15:35:55.902659208 +0000 UTC m=+0.142960914 container attach 36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:35:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:56.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:56 np0005592157 vigilant_einstein[363081]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:35:56 np0005592157 vigilant_einstein[363081]: --> relative data size: 1.0
Jan 22 10:35:56 np0005592157 vigilant_einstein[363081]: --> All data devices are unavailable
Jan 22 10:35:56 np0005592157 systemd[1]: libpod-36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8.scope: Deactivated successfully.
Jan 22 10:35:56 np0005592157 podman[363063]: 2026-01-22 15:35:56.728212018 +0000 UTC m=+0.968513714 container died 36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:35:56 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d25e7d1c23b36daa6be83b7d5e4e70e48853e2eac078556429998f6de33ee4ba-merged.mount: Deactivated successfully.
Jan 22 10:35:56 np0005592157 podman[363063]: 2026-01-22 15:35:56.790657716 +0000 UTC m=+1.030959402 container remove 36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:35:56 np0005592157 systemd[1]: libpod-conmon-36262c855faf343d8f9fa1d65b1cf4f46ef746ceeb3dc4909069cb806cfb48c8.scope: Deactivated successfully.
Jan 22 10:35:57 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:57 np0005592157 podman[363249]: 2026-01-22 15:35:57.371599573 +0000 UTC m=+0.035116421 container create 3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:35:57 np0005592157 systemd[1]: Started libpod-conmon-3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8.scope.
Jan 22 10:35:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:57 np0005592157 podman[363249]: 2026-01-22 15:35:57.433098808 +0000 UTC m=+0.096615736 container init 3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:35:57 np0005592157 podman[363249]: 2026-01-22 15:35:57.438760938 +0000 UTC m=+0.102277786 container start 3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:35:57 np0005592157 gracious_elion[363265]: 167 167
Jan 22 10:35:57 np0005592157 podman[363249]: 2026-01-22 15:35:57.443956997 +0000 UTC m=+0.107473875 container attach 3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:35:57 np0005592157 systemd[1]: libpod-3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8.scope: Deactivated successfully.
Jan 22 10:35:57 np0005592157 podman[363249]: 2026-01-22 15:35:57.445425773 +0000 UTC m=+0.108942631 container died 3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 10:35:57 np0005592157 podman[363249]: 2026-01-22 15:35:57.356512149 +0000 UTC m=+0.020029017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-24763b06a944c7684877a7eb48adadc74c2388faadce0009ae539491704d8b9a-merged.mount: Deactivated successfully.
Jan 22 10:35:57 np0005592157 podman[363249]: 2026-01-22 15:35:57.482680636 +0000 UTC m=+0.146197494 container remove 3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:35:57 np0005592157 systemd[1]: libpod-conmon-3cb201c4906b78cc2d445e4d89184d54ba9303ae4f07553ead1a74d04ebc89d8.scope: Deactivated successfully.
Jan 22 10:35:57 np0005592157 podman[363290]: 2026-01-22 15:35:57.669616159 +0000 UTC m=+0.047315123 container create e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mendeleev, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:35:57 np0005592157 systemd[1]: Started libpod-conmon-e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e.scope.
Jan 22 10:35:57 np0005592157 podman[363290]: 2026-01-22 15:35:57.64502173 +0000 UTC m=+0.022720724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf92e370f8c2b7d40a18f243468c696659312dbb3948af7308048f8f75eb357/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf92e370f8c2b7d40a18f243468c696659312dbb3948af7308048f8f75eb357/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf92e370f8c2b7d40a18f243468c696659312dbb3948af7308048f8f75eb357/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:57 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf92e370f8c2b7d40a18f243468c696659312dbb3948af7308048f8f75eb357/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:57 np0005592157 podman[363290]: 2026-01-22 15:35:57.766440399 +0000 UTC m=+0.144139423 container init e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:35:57 np0005592157 podman[363290]: 2026-01-22 15:35:57.775315809 +0000 UTC m=+0.153014773 container start e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:35:57 np0005592157 podman[363290]: 2026-01-22 15:35:57.779448151 +0000 UTC m=+0.157147215 container attach e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mendeleev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 10:35:58 np0005592157 ceph-mon[74359]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:58 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]: {
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:    "0": [
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:        {
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "devices": [
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "/dev/loop3"
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            ],
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "lv_name": "ceph_lv0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "lv_size": "7511998464",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "name": "ceph_lv0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "tags": {
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.cluster_name": "ceph",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.crush_device_class": "",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.encrypted": "0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.osd_id": "0",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.type": "block",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:                "ceph.vdo": "0"
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            },
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "type": "block",
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:            "vg_name": "ceph_vg0"
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:        }
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]:    ]
Jan 22 10:35:58 np0005592157 nervous_mendeleev[363306]: }
Jan 22 10:35:58 np0005592157 systemd[1]: libpod-e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e.scope: Deactivated successfully.
Jan 22 10:35:58 np0005592157 conmon[363306]: conmon e4d373d01d5649385b21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e.scope/container/memory.events
Jan 22 10:35:58 np0005592157 podman[363290]: 2026-01-22 15:35:58.502269225 +0000 UTC m=+0.879968169 container died e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mendeleev, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 10:35:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:58.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:58 np0005592157 systemd[1]: var-lib-containers-storage-overlay-baf92e370f8c2b7d40a18f243468c696659312dbb3948af7308048f8f75eb357-merged.mount: Deactivated successfully.
Jan 22 10:35:58 np0005592157 podman[363290]: 2026-01-22 15:35:58.560057367 +0000 UTC m=+0.937756321 container remove e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 10:35:58 np0005592157 systemd[1]: libpod-conmon-e4d373d01d5649385b21fd8e49a5661a73af9b6cf921e29aa9fed455f3f0806e.scope: Deactivated successfully.
Jan 22 10:35:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:35:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:35:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:58.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:35:59 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 187 slow ops, oldest one blocked for 7148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:59 np0005592157 podman[363468]: 2026-01-22 15:35:59.201918024 +0000 UTC m=+0.052861151 container create ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jang, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:35:59 np0005592157 systemd[1]: Started libpod-conmon-ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318.scope.
Jan 22 10:35:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:59 np0005592157 podman[363468]: 2026-01-22 15:35:59.175809757 +0000 UTC m=+0.026752964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:59 np0005592157 podman[363468]: 2026-01-22 15:35:59.281304381 +0000 UTC m=+0.132247498 container init ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 10:35:59 np0005592157 podman[363468]: 2026-01-22 15:35:59.289439643 +0000 UTC m=+0.140382760 container start ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:35:59 np0005592157 quirky_jang[363484]: 167 167
Jan 22 10:35:59 np0005592157 podman[363468]: 2026-01-22 15:35:59.29376315 +0000 UTC m=+0.144706277 container attach ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:35:59 np0005592157 systemd[1]: libpod-ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318.scope: Deactivated successfully.
Jan 22 10:35:59 np0005592157 podman[363468]: 2026-01-22 15:35:59.295177415 +0000 UTC m=+0.146120532 container died ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:35:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f3841f6841eb572a2df739f12a2437a39b0ca2eb23b83018daf98b2f3155d15f-merged.mount: Deactivated successfully.
Jan 22 10:35:59 np0005592157 podman[363468]: 2026-01-22 15:35:59.33329383 +0000 UTC m=+0.184236947 container remove ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:35:59 np0005592157 systemd[1]: libpod-conmon-ad0d517276d530d4c42927d3296f43a8feb6463a41a3a28a1ba0bedb065df318.scope: Deactivated successfully.
Jan 22 10:35:59 np0005592157 podman[363507]: 2026-01-22 15:35:59.490233419 +0000 UTC m=+0.045549319 container create e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_greider, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:35:59 np0005592157 systemd[1]: Started libpod-conmon-e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46.scope.
Jan 22 10:35:59 np0005592157 podman[363507]: 2026-01-22 15:35:59.468704146 +0000 UTC m=+0.024020066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45e22365ad2f6744beb10abc4ad98104a0bf4d8dbdcf01a86f0c1e0a4c6041aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45e22365ad2f6744beb10abc4ad98104a0bf4d8dbdcf01a86f0c1e0a4c6041aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45e22365ad2f6744beb10abc4ad98104a0bf4d8dbdcf01a86f0c1e0a4c6041aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:59 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45e22365ad2f6744beb10abc4ad98104a0bf4d8dbdcf01a86f0c1e0a4c6041aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:59 np0005592157 podman[363507]: 2026-01-22 15:35:59.599827536 +0000 UTC m=+0.155143466 container init e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:35:59 np0005592157 podman[363507]: 2026-01-22 15:35:59.605631409 +0000 UTC m=+0.160947299 container start e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:35:59 np0005592157 podman[363507]: 2026-01-22 15:35:59.609818223 +0000 UTC m=+0.165134103 container attach e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:36:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:00 np0005592157 ceph-mon[74359]: Health check update: 187 slow ops, oldest one blocked for 7148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:00 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:00 np0005592157 great_greider[363524]: {
Jan 22 10:36:00 np0005592157 great_greider[363524]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:36:00 np0005592157 great_greider[363524]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:36:00 np0005592157 great_greider[363524]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:36:00 np0005592157 great_greider[363524]:        "osd_id": 0,
Jan 22 10:36:00 np0005592157 great_greider[363524]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:36:00 np0005592157 great_greider[363524]:        "type": "bluestore"
Jan 22 10:36:00 np0005592157 great_greider[363524]:    }
Jan 22 10:36:00 np0005592157 great_greider[363524]: }
Jan 22 10:36:00 np0005592157 systemd[1]: libpod-e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46.scope: Deactivated successfully.
Jan 22 10:36:00 np0005592157 podman[363507]: 2026-01-22 15:36:00.419279174 +0000 UTC m=+0.974595054 container died e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 10:36:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-45e22365ad2f6744beb10abc4ad98104a0bf4d8dbdcf01a86f0c1e0a4c6041aa-merged.mount: Deactivated successfully.
Jan 22 10:36:00 np0005592157 podman[363507]: 2026-01-22 15:36:00.482385168 +0000 UTC m=+1.037701048 container remove e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_greider, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 22 10:36:00 np0005592157 systemd[1]: libpod-conmon-e3cd5d544624d1f5d4520f63a8b3f436aab8d172ccab576e517560315b76cb46.scope: Deactivated successfully.
Jan 22 10:36:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:00.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:36:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:36:00 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1e7e273f-56f9-4bcb-bb8e-de0170ad7c9e does not exist
Jan 22 10:36:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 661ef1e6-c842-4323-a9af-b24195d60748 does not exist
Jan 22 10:36:00 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 48ad7b74-e0f9-46e5-b2e8-191c4b3e6cc9 does not exist
Jan 22 10:36:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:00.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:01 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:01 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:02 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:02 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:36:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:36:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:02.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:03 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 101 slow ops, oldest one blocked for 7153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:04.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:04 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:04 np0005592157 ceph-mon[74359]: Health check update: 101 slow ops, oldest one blocked for 7153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:36:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:36:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:06.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:06.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:07 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:07 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:08 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:08.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:36:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:08.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:36:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 101 slow ops, oldest one blocked for 7158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:09 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:09 np0005592157 ceph-mon[74359]: Health check update: 101 slow ops, oldest one blocked for 7158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:10 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:36:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:10.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:36:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:10.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:11 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:12 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:12.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:12.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:13 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:14 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:36:14 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:36:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 101 slow ops, oldest one blocked for 7163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:14 np0005592157 podman[363640]: 2026-01-22 15:36:14.496822459 +0000 UTC m=+0.094233176 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:36:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:14.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:14.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:14 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:14 np0005592157 ceph-mon[74359]: Health check update: 101 slow ops, oldest one blocked for 7163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:15 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:16.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:36:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:36:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:16.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:16 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:17 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:18.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:18.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:18 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 101 slow ops, oldest one blocked for 7168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:19 np0005592157 ceph-mon[74359]: Health check update: 101 slow ops, oldest one blocked for 7168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:19 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:20.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:20.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:20 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:22 np0005592157 podman[363690]: 2026-01-22 15:36:22.346039828 +0000 UTC m=+0.082364062 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 10:36:22 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:22.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:22.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:23 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 101 slow ops, oldest one blocked for 7173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:24.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:24.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:25 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:25 np0005592157 ceph-mon[74359]: Health check update: 101 slow ops, oldest one blocked for 7173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:25 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 10:36:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 10:36:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:26.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:26 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:28.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 101 slow ops, oldest one blocked for 7178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:29 np0005592157 ceph-mon[74359]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:29 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:29 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:30 np0005592157 ceph-mon[74359]: Health check update: 101 slow ops, oldest one blocked for 7178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:30 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:30.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:31 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:36:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:36:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:33 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 188 slow ops, oldest one blocked for 7183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:34.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:34 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592157 ceph-mon[74359]: Health check update: 188 slow ops, oldest one blocked for 7183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:36 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:36:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:36:37 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:37 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:38 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:38.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 188 slow ops, oldest one blocked for 7188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:39 np0005592157 ceph-mon[74359]: Health check update: 188 slow ops, oldest one blocked for 7188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:40.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:40 np0005592157 ceph-mon[74359]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:40 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:42 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:42.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:43 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 120 slow ops, oldest one blocked for 7193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:44.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:44 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:44 np0005592157 ceph-mon[74359]: Health check update: 120 slow ops, oldest one blocked for 7193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:36:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:36:45 np0005592157 podman[363777]: 2026-01-22 15:36:45.357734612 +0000 UTC m=+0.083591663 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 22 10:36:45 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:45 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:46.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:36:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:36:47 np0005592157 ceph-mon[74359]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:36:47.678 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:36:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:36:47.679 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:36:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:36:47.679 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:36:47
Jan 22 10:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'images', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 22 10:36:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:36:48 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:48.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:49 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 120 slow ops, oldest one blocked for 7197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:50 np0005592157 ceph-mon[74359]: Health check update: 120 slow ops, oldest one blocked for 7197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:50 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:51 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:52 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:53 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:53 np0005592157 podman[363800]: 2026-01-22 15:36:53.376351326 +0000 UTC m=+0.111090674 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 10:36:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 7202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:54 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:54 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 7202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:55 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:56 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:57 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:36:58 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:58.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:36:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:36:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:59.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:36:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 7207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:59 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:59 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 7207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Jan 22 10:37:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:00.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:00 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:01.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:01 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 22 10:37:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:02.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:02 np0005592157 podman[364050]: 2026-01-22 15:37:02.714258912 +0000 UTC m=+0.948881167 container exec 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:37:02 np0005592157 podman[364050]: 2026-01-22 15:37:02.83038575 +0000 UTC m=+1.065007905 container exec_died 07669b4a5faab602dbf0ccebe45bebb9e568b32d1f7b14cbfba8a4aa53d64f7e (image=quay.io/ceph/ceph:v18, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:37:02 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:03.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:03 np0005592157 podman[364208]: 2026-01-22 15:37:03.440656454 +0000 UTC m=+0.092088393 container exec 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:37:03 np0005592157 podman[364208]: 2026-01-22 15:37:03.44976899 +0000 UTC m=+0.101200909 container exec_died 25bfb4d65b305badcde39abfd60f99eca474f1e114cab6d678d4c2518a3f0761 (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-0-erkqlp)
Jan 22 10:37:03 np0005592157 podman[364275]: 2026-01-22 15:37:03.633107543 +0000 UTC m=+0.047991740 container exec f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 10:37:03 np0005592157 podman[364275]: 2026-01-22 15:37:03.646213468 +0000 UTC m=+0.061097635 container exec_died f1b4339123eae90ad9e7595784eaed429e4fdf9053229e1bf6b0e887e16d8688 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-0-hawera, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 10:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:03 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:37:03 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:03 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 7212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:04.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:04 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0ceac20d-3ffa-4171-a336-308764e41690 does not exist
Jan 22 10:37:04 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1dbfc4e2-5586-4ad6-8541-24d86b8f92e9 does not exist
Jan 22 10:37:04 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0c13d99b-5e0a-481a-a148-3643bbad7bf8 does not exist
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 7212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:04 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:37:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:37:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:05.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:37:05 np0005592157 podman[364580]: 2026-01-22 15:37:05.231139768 +0000 UTC m=+0.036161667 container create eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:37:05 np0005592157 systemd[1]: Started libpod-conmon-eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2.scope.
Jan 22 10:37:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:37:05 np0005592157 podman[364580]: 2026-01-22 15:37:05.308585057 +0000 UTC m=+0.113606966 container init eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:37:05 np0005592157 podman[364580]: 2026-01-22 15:37:05.213981653 +0000 UTC m=+0.019003572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:37:05 np0005592157 podman[364580]: 2026-01-22 15:37:05.315128959 +0000 UTC m=+0.120150858 container start eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:37:05 np0005592157 podman[364580]: 2026-01-22 15:37:05.318206346 +0000 UTC m=+0.123228245 container attach eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:37:05 np0005592157 crazy_visvesvaraya[364597]: 167 167
Jan 22 10:37:05 np0005592157 systemd[1]: libpod-eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2.scope: Deactivated successfully.
Jan 22 10:37:05 np0005592157 podman[364580]: 2026-01-22 15:37:05.322448591 +0000 UTC m=+0.127470490 container died eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:37:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-8f598ef71bea0ad5989c4e30255b29a6405036a743abd85ba34944c7648bd278-merged.mount: Deactivated successfully.
Jan 22 10:37:05 np0005592157 podman[364580]: 2026-01-22 15:37:05.370781369 +0000 UTC m=+0.175803268 container remove eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:37:05 np0005592157 systemd[1]: libpod-conmon-eee926b5b54470d528dc2a7b80008750b3849e584441aa3413ef6d08eebdb8c2.scope: Deactivated successfully.
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:37:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:37:05 np0005592157 podman[364622]: 2026-01-22 15:37:05.526761634 +0000 UTC m=+0.039500650 container create 5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:37:05 np0005592157 systemd[1]: Started libpod-conmon-5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449.scope.
Jan 22 10:37:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:37:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b99637d77b9fe7bba271ace805883908beb78837c69753ee64a52d5b5a09394/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b99637d77b9fe7bba271ace805883908beb78837c69753ee64a52d5b5a09394/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b99637d77b9fe7bba271ace805883908beb78837c69753ee64a52d5b5a09394/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b99637d77b9fe7bba271ace805883908beb78837c69753ee64a52d5b5a09394/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b99637d77b9fe7bba271ace805883908beb78837c69753ee64a52d5b5a09394/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:05 np0005592157 podman[364622]: 2026-01-22 15:37:05.509146098 +0000 UTC m=+0.021885144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:37:05 np0005592157 podman[364622]: 2026-01-22 15:37:05.612792306 +0000 UTC m=+0.125531332 container init 5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:37:05 np0005592157 podman[364622]: 2026-01-22 15:37:05.621080502 +0000 UTC m=+0.133819518 container start 5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:37:05 np0005592157 podman[364622]: 2026-01-22 15:37:05.624803844 +0000 UTC m=+0.137542860 container attach 5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:37:06 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 10:37:06 np0005592157 strange_bassi[364637]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:37:06 np0005592157 strange_bassi[364637]: --> relative data size: 1.0
Jan 22 10:37:06 np0005592157 strange_bassi[364637]: --> All data devices are unavailable
Jan 22 10:37:06 np0005592157 systemd[1]: libpod-5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449.scope: Deactivated successfully.
Jan 22 10:37:06 np0005592157 podman[364622]: 2026-01-22 15:37:06.423435506 +0000 UTC m=+0.936174522 container died 5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 10:37:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5b99637d77b9fe7bba271ace805883908beb78837c69753ee64a52d5b5a09394-merged.mount: Deactivated successfully.
Jan 22 10:37:06 np0005592157 podman[364622]: 2026-01-22 15:37:06.49379357 +0000 UTC m=+1.006532586 container remove 5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:37:06 np0005592157 systemd[1]: libpod-conmon-5be1907e653976536cc9ff7a0d6d518646a9799a4225ac64880e68d770c94449.scope: Deactivated successfully.
Jan 22 10:37:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:06.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:07 np0005592157 podman[364809]: 2026-01-22 15:37:07.058857033 +0000 UTC m=+0.041441218 container create 892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 10:37:07 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:07 np0005592157 systemd[1]: Started libpod-conmon-892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338.scope.
Jan 22 10:37:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:07.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:37:07 np0005592157 podman[364809]: 2026-01-22 15:37:07.038039427 +0000 UTC m=+0.020623632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:37:07 np0005592157 podman[364809]: 2026-01-22 15:37:07.148460734 +0000 UTC m=+0.131044939 container init 892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hellman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:37:07 np0005592157 podman[364809]: 2026-01-22 15:37:07.154155965 +0000 UTC m=+0.136740160 container start 892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:37:07 np0005592157 podman[364809]: 2026-01-22 15:37:07.157384595 +0000 UTC m=+0.139968800 container attach 892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:37:07 np0005592157 jolly_hellman[364826]: 167 167
Jan 22 10:37:07 np0005592157 systemd[1]: libpod-892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338.scope: Deactivated successfully.
Jan 22 10:37:07 np0005592157 podman[364809]: 2026-01-22 15:37:07.15919314 +0000 UTC m=+0.141777325 container died 892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:37:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e1a7a35f752d02debc0cf0b4c5ae19ada85cb5fb8688ed294b81d8133008a029-merged.mount: Deactivated successfully.
Jan 22 10:37:07 np0005592157 podman[364809]: 2026-01-22 15:37:07.197877078 +0000 UTC m=+0.180461263 container remove 892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:37:07 np0005592157 systemd[1]: libpod-conmon-892c39941d18a60b7eb838a226c26bb48463ee3b080122ce8f97d608bd9f1338.scope: Deactivated successfully.
Jan 22 10:37:07 np0005592157 podman[364850]: 2026-01-22 15:37:07.344269387 +0000 UTC m=+0.036675361 container create 3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 22 10:37:07 np0005592157 systemd[1]: Started libpod-conmon-3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39.scope.
Jan 22 10:37:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:37:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050582b6deb0765648e77b318a0db7cc157e1046584ca13d34f1e036473ed02c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050582b6deb0765648e77b318a0db7cc157e1046584ca13d34f1e036473ed02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050582b6deb0765648e77b318a0db7cc157e1046584ca13d34f1e036473ed02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:07 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050582b6deb0765648e77b318a0db7cc157e1046584ca13d34f1e036473ed02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:07 np0005592157 podman[364850]: 2026-01-22 15:37:07.424575977 +0000 UTC m=+0.116981941 container init 3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:37:07 np0005592157 podman[364850]: 2026-01-22 15:37:07.328808533 +0000 UTC m=+0.021214507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:37:07 np0005592157 podman[364850]: 2026-01-22 15:37:07.432226676 +0000 UTC m=+0.124632650 container start 3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:37:07 np0005592157 podman[364850]: 2026-01-22 15:37:07.436459271 +0000 UTC m=+0.128865235 container attach 3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 10:37:08 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]: {
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:    "0": [
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:        {
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "devices": [
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "/dev/loop3"
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            ],
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "lv_name": "ceph_lv0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "lv_size": "7511998464",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "name": "ceph_lv0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "tags": {
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.cluster_name": "ceph",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.crush_device_class": "",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.encrypted": "0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.osd_id": "0",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.type": "block",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:                "ceph.vdo": "0"
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            },
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "type": "block",
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:            "vg_name": "ceph_vg0"
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:        }
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]:    ]
Jan 22 10:37:08 np0005592157 elegant_dirac[364867]: }
Jan 22 10:37:08 np0005592157 systemd[1]: libpod-3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39.scope: Deactivated successfully.
Jan 22 10:37:08 np0005592157 podman[364850]: 2026-01-22 15:37:08.216582325 +0000 UTC m=+0.908988299 container died 3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:37:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-050582b6deb0765648e77b318a0db7cc157e1046584ca13d34f1e036473ed02c-merged.mount: Deactivated successfully.
Jan 22 10:37:08 np0005592157 podman[364850]: 2026-01-22 15:37:08.270919582 +0000 UTC m=+0.963325546 container remove 3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:37:08 np0005592157 systemd[1]: libpod-conmon-3d83bc8881595e680a7ac0e9354d6fad7d40d35a19c09807b321b2d43d619b39.scope: Deactivated successfully.
Jan 22 10:37:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:08.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:08 np0005592157 podman[365031]: 2026-01-22 15:37:08.835091994 +0000 UTC m=+0.036930076 container create 25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:37:08 np0005592157 systemd[1]: Started libpod-conmon-25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817.scope.
Jan 22 10:37:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:37:08 np0005592157 podman[365031]: 2026-01-22 15:37:08.908672398 +0000 UTC m=+0.110510500 container init 25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:37:08 np0005592157 podman[365031]: 2026-01-22 15:37:08.819731634 +0000 UTC m=+0.021569746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:37:08 np0005592157 podman[365031]: 2026-01-22 15:37:08.917570818 +0000 UTC m=+0.119408920 container start 25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:37:08 np0005592157 funny_meninsky[365047]: 167 167
Jan 22 10:37:08 np0005592157 systemd[1]: libpod-25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817.scope: Deactivated successfully.
Jan 22 10:37:08 np0005592157 podman[365031]: 2026-01-22 15:37:08.921489465 +0000 UTC m=+0.123327567 container attach 25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 10:37:08 np0005592157 podman[365031]: 2026-01-22 15:37:08.922113511 +0000 UTC m=+0.123951593 container died 25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:37:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5db892ca92dc0f9189af685f84c3de9d50de9ade6b246dbd5f4858c78d534006-merged.mount: Deactivated successfully.
Jan 22 10:37:08 np0005592157 podman[365031]: 2026-01-22 15:37:08.958210045 +0000 UTC m=+0.160048127 container remove 25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:37:08 np0005592157 systemd[1]: libpod-conmon-25dee4d342590bee6809b413ee74c5ea75f50062b5fd9bdcd21f970b70069817.scope: Deactivated successfully.
Jan 22 10:37:09 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:37:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:37:09 np0005592157 podman[365069]: 2026-01-22 15:37:09.137371716 +0000 UTC m=+0.065780552 container create 8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:37:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 7217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:09 np0005592157 podman[365069]: 2026-01-22 15:37:09.093814246 +0000 UTC m=+0.022223162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:37:09 np0005592157 systemd[1]: Started libpod-conmon-8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7.scope.
Jan 22 10:37:09 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:37:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef5cea6ac0795a21c0253b66fc76224417081cd4c2ec30c70a9d185d3870acb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef5cea6ac0795a21c0253b66fc76224417081cd4c2ec30c70a9d185d3870acb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef5cea6ac0795a21c0253b66fc76224417081cd4c2ec30c70a9d185d3870acb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:09 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef5cea6ac0795a21c0253b66fc76224417081cd4c2ec30c70a9d185d3870acb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:37:09 np0005592157 podman[365069]: 2026-01-22 15:37:09.253083853 +0000 UTC m=+0.181492779 container init 8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_williamson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:37:09 np0005592157 podman[365069]: 2026-01-22 15:37:09.259641356 +0000 UTC m=+0.188050182 container start 8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_williamson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:37:09 np0005592157 podman[365069]: 2026-01-22 15:37:09.264060535 +0000 UTC m=+0.192469461 container attach 8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 10:37:10 np0005592157 competent_williamson[365085]: {
Jan 22 10:37:10 np0005592157 competent_williamson[365085]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:37:10 np0005592157 competent_williamson[365085]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:37:10 np0005592157 competent_williamson[365085]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:37:10 np0005592157 competent_williamson[365085]:        "osd_id": 0,
Jan 22 10:37:10 np0005592157 competent_williamson[365085]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:37:10 np0005592157 competent_williamson[365085]:        "type": "bluestore"
Jan 22 10:37:10 np0005592157 competent_williamson[365085]:    }
Jan 22 10:37:10 np0005592157 competent_williamson[365085]: }
Jan 22 10:37:10 np0005592157 systemd[1]: libpod-8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7.scope: Deactivated successfully.
Jan 22 10:37:10 np0005592157 podman[365069]: 2026-01-22 15:37:10.069144858 +0000 UTC m=+0.997553694 container died 8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:37:10 np0005592157 systemd[1]: var-lib-containers-storage-overlay-4ef5cea6ac0795a21c0253b66fc76224417081cd4c2ec30c70a9d185d3870acb-merged.mount: Deactivated successfully.
Jan 22 10:37:10 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:10 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 7217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:10 np0005592157 podman[365069]: 2026-01-22 15:37:10.130499889 +0000 UTC m=+1.058908725 container remove 8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:37:10 np0005592157 systemd[1]: libpod-conmon-8ac4d2499928ad3fe0189f3e85d1e1deb40ff6258a2bae06322aefd890bdf7b7.scope: Deactivated successfully.
Jan 22 10:37:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 0 B/s wr, 195 op/s
Jan 22 10:37:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:37:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:37:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ad66fc55-43ca-467d-ade6-f8c06360699c does not exist
Jan 22 10:37:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3ef1e521-d9c3-46e8-ab75-ef09725b474c does not exist
Jan 22 10:37:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5ccbf97f-3593-47af-ba4c-32d800ba15ef does not exist
Jan 22 10:37:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:11.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:11 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:11 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 22 10:37:12 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:13.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:13 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 163 op/s
Jan 22 10:37:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 7222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:14 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:14 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 7222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:14.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.003000074s ======
Jan 22 10:37:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Jan 22 10:37:15 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s
Jan 22 10:37:16 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:16 np0005592157 podman[365222]: 2026-01-22 15:37:16.354665703 +0000 UTC m=+0.069743640 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 10:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:37:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:37:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:17 np0005592157 ceph-mon[74359]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 22 10:37:18 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:18.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:37:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:37:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 49 slow ops, oldest one blocked for 7227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:19 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:19 np0005592157 ceph-mon[74359]: Health check update: 49 slow ops, oldest one blocked for 7227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 22 10:37:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:37:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:20.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:37:20 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:21.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:21 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 8 op/s
Jan 22 10:37:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:22.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:22 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:23.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:23 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 132 slow ops, oldest one blocked for 7232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:24 np0005592157 podman[365246]: 2026-01-22 15:37:24.373217317 +0000 UTC m=+0.110898609 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:37:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:24.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:24 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:24 np0005592157 ceph-mon[74359]: Health check update: 132 slow ops, oldest one blocked for 7232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:25.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:25 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:27 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:37:27.048 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:37:27 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:37:27.049 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:37:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:27.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:28 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:37:28.052 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:37:28 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:29.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:29 np0005592157 ceph-mon[74359]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:29 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:29 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 132 slow ops, oldest one blocked for 7237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:30 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:30 np0005592157 ceph-mon[74359]: Health check update: 132 slow ops, oldest one blocked for 7237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:31.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:31 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:32 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:37:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:32.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:37:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:33 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 96 slow ops, oldest one blocked for 7242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:34 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:34 np0005592157 ceph-mon[74359]: Health check update: 96 slow ops, oldest one blocked for 7242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:35 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:36 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:36.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:37.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:37 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #246. Immutable memtables: 0.
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.468301) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 155] Flushing memtable with next log file: 246
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258468543, "job": 155, "event": "flush_started", "num_memtables": 1, "num_entries": 2509, "num_deletes": 540, "total_data_size": 3451742, "memory_usage": 3515768, "flush_reason": "Manual Compaction"}
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 155] Level-0 flush table #247: started
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258501601, "cf_name": "default", "job": 155, "event": "table_file_creation", "file_number": 247, "file_size": 3350889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 110158, "largest_seqno": 112666, "table_properties": {"data_size": 3340601, "index_size": 5693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32349, "raw_average_key_size": 23, "raw_value_size": 3316033, "raw_average_value_size": 2397, "num_data_blocks": 239, "num_entries": 1383, "num_filter_entries": 1383, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096088, "oldest_key_time": 1769096088, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 247, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 155] Flush lasted 33322 microseconds, and 16259 cpu microseconds.
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.501732) [db/flush_job.cc:967] [default] [JOB 155] Level-0 flush table #247: 3350889 bytes OK
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.501775) [db/memtable_list.cc:519] [default] Level-0 commit table #247 started
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.504620) [db/memtable_list.cc:722] [default] Level-0 commit table #247: memtable #1 done
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.504661) EVENT_LOG_v1 {"time_micros": 1769096258504653, "job": 155, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.504685) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 155] Try to delete WAL files size 3440059, prev total WAL file size 3440059, number of live WAL files 2.
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000243.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.506515) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 156] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 155 Base level 0, inputs: [247(3272KB)], [245(10MB)]
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258506866, "job": 156, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [247], "files_L6": [245], "score": -1, "input_data_size": 14681576, "oldest_snapshot_seqno": -1}
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 156] Generated table #248: 14551 keys, 12845800 bytes, temperature: kUnknown
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258646549, "cf_name": "default", "job": 156, "event": "table_file_creation", "file_number": 248, "file_size": 12845800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12765483, "index_size": 42828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 398963, "raw_average_key_size": 27, "raw_value_size": 12517614, "raw_average_value_size": 860, "num_data_blocks": 1556, "num_entries": 14551, "num_filter_entries": 14551, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 248, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.647072) [db/compaction/compaction_job.cc:1663] [default] [JOB 156] Compacted 1@0 + 1@6 files to L6 => 12845800 bytes
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.649004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.0 rd, 91.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 15648, records dropped: 1097 output_compression: NoCompression
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.649037) EVENT_LOG_v1 {"time_micros": 1769096258649022, "job": 156, "event": "compaction_finished", "compaction_time_micros": 139834, "compaction_time_cpu_micros": 63778, "output_level": 6, "num_output_files": 1, "total_output_size": 12845800, "num_input_records": 15648, "num_output_records": 14551, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000247.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258650659, "job": 156, "event": "table_file_deletion", "file_number": 247}
Jan 22 10:37:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:38.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000245.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258654840, "job": 156, "event": "table_file_deletion", "file_number": 245}
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.506244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.655013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.655021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.655024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.655027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:37:38.655030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 96 slow ops, oldest one blocked for 7247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:39 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:39 np0005592157 ceph-mon[74359]: Health check update: 96 slow ops, oldest one blocked for 7247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:40 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:40.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:41.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:41 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:42.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:43.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:43 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:43 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 96 slow ops, oldest one blocked for 7253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:44.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:45 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:45 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:45 np0005592157 ceph-mon[74359]: Health check update: 96 slow ops, oldest one blocked for 7253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:37:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:45.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:37:46 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:37:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:37:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 10:37:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:46.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 10:37:47 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:47 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:47.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:47 np0005592157 podman[365332]: 2026-01-22 15:37:47.33410583 +0000 UTC m=+0.057454295 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 10:37:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:37:47.679 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:37:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:37:47.680 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:37:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:37:47.680 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:37:47
Jan 22 10:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr', 'backups', 'cephfs.cephfs.data']
Jan 22 10:37:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:37:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 96 slow ops, oldest one blocked for 7258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:49 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:49 np0005592157 ceph-mon[74359]: Health check update: 96 slow ops, oldest one blocked for 7258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:50 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:51 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:52 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:53 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 96 slow ops, oldest one blocked for 7263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:54 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592157 ceph-mon[74359]: Health check update: 96 slow ops, oldest one blocked for 7263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:55.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:55 np0005592157 podman[365379]: 2026-01-22 15:37:55.257000795 +0000 UTC m=+0.088689539 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:37:56 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:56.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:37:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:37:57 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:57 np0005592157 ceph-mon[74359]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:37:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:58.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:59 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:37:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:37:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 96 slow ops, oldest one blocked for 7267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 894 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 1.0 MiB/s wr, 7 op/s
Jan 22 10:38:00 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:00 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:00 np0005592157 ceph-mon[74359]: Health check update: 96 slow ops, oldest one blocked for 7267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:00.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 22 10:38:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 22 10:38:01 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 22 10:38:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:01 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 894 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 1.2 MiB/s wr, 9 op/s
Jan 22 10:38:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:38:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:02.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:38:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 22 10:38:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 22 10:38:02 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 22 10:38:02 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 902 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Jan 22 10:38:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:04 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:04 np0005592157 ceph-mon[74359]: 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 10:38:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:38:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:04.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:38:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:38:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002854441478224456 of space, bias 1.0, pg target 0.8420602360762145 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:38:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:38:05 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:05 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.6 MiB/s wr, 50 op/s
Jan 22 10:38:06 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:06.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:07 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.0 MiB/s wr, 38 op/s
Jan 22 10:38:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:08.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:09.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:09 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 22 10:38:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 22 10:38:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 22 10:38:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.0 MiB/s wr, 42 op/s
Jan 22 10:38:10 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:10 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:10 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:10.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:11.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:11 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:11 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:38:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:12 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:38:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 879 KiB/s wr, 36 op/s
Jan 22 10:38:12 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:12.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:12 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:12 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f1e77b26-006d-4578-8075-f14cd6b37ee3 does not exist
Jan 22 10:38:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 1bb0caa4-4e19-4d01-ab7b-e5cbaca78b71 does not exist
Jan 22 10:38:13 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev af649a61-eff5-4a84-8907-2439258f480f does not exist
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:38:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:13.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:13 np0005592157 podman[365712]: 2026-01-22 15:38:13.781232493 +0000 UTC m=+0.025141864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:13 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:38:13 np0005592157 podman[365712]: 2026-01-22 15:38:13.924465803 +0000 UTC m=+0.168375144 container create 2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:38:14 np0005592157 systemd[1]: Started libpod-conmon-2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550.scope.
Jan 22 10:38:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:38:14 np0005592157 podman[365712]: 2026-01-22 15:38:14.160242636 +0000 UTC m=+0.404151997 container init 2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:38:14 np0005592157 podman[365712]: 2026-01-22 15:38:14.167833774 +0000 UTC m=+0.411743095 container start 2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 10:38:14 np0005592157 epic_hypatia[365730]: 167 167
Jan 22 10:38:14 np0005592157 systemd[1]: libpod-2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550.scope: Deactivated successfully.
Jan 22 10:38:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Jan 22 10:38:14 np0005592157 podman[365712]: 2026-01-22 15:38:14.216106881 +0000 UTC m=+0.460016232 container attach 2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 10:38:14 np0005592157 podman[365712]: 2026-01-22 15:38:14.216540741 +0000 UTC m=+0.460450062 container died 2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hypatia, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 10:38:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-de35c13ed2d2fc5b8d8fdc55ba675ca3de5e53ad3cf3dbc8df3baba9643c8380-merged.mount: Deactivated successfully.
Jan 22 10:38:14 np0005592157 podman[365712]: 2026-01-22 15:38:14.363475243 +0000 UTC m=+0.607384574 container remove 2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:38:14 np0005592157 systemd[1]: libpod-conmon-2f62d0a5849d8c43afc2ee84f8fd11d98ea70b44ab5eb16c153f7731176dc550.scope: Deactivated successfully.
Jan 22 10:38:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:14 np0005592157 podman[365754]: 2026-01-22 15:38:14.505637236 +0000 UTC m=+0.022757525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:38:14 np0005592157 podman[365754]: 2026-01-22 15:38:14.669752653 +0000 UTC m=+0.186872932 container create 269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wiles, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:38:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:14.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:14 np0005592157 systemd[1]: Started libpod-conmon-269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c.scope.
Jan 22 10:38:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:38:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e46ab264439b9d83d185069c47980f937bf9db782e45088e6e2b48287e6b38b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e46ab264439b9d83d185069c47980f937bf9db782e45088e6e2b48287e6b38b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e46ab264439b9d83d185069c47980f937bf9db782e45088e6e2b48287e6b38b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e46ab264439b9d83d185069c47980f937bf9db782e45088e6e2b48287e6b38b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:14 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e46ab264439b9d83d185069c47980f937bf9db782e45088e6e2b48287e6b38b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:15 np0005592157 podman[365754]: 2026-01-22 15:38:15.025291733 +0000 UTC m=+0.542412032 container init 269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wiles, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:38:15 np0005592157 podman[365754]: 2026-01-22 15:38:15.033249431 +0000 UTC m=+0.550369700 container start 269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:38:15 np0005592157 podman[365754]: 2026-01-22 15:38:15.054069697 +0000 UTC m=+0.571189976 container attach 269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wiles, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:38:15 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:15 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:15 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:15.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:15 np0005592157 ecstatic_wiles[365770]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:38:15 np0005592157 ecstatic_wiles[365770]: --> relative data size: 1.0
Jan 22 10:38:15 np0005592157 ecstatic_wiles[365770]: --> All data devices are unavailable
Jan 22 10:38:15 np0005592157 systemd[1]: libpod-269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c.scope: Deactivated successfully.
Jan 22 10:38:15 np0005592157 conmon[365770]: conmon 269cbf854c3762326546 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c.scope/container/memory.events
Jan 22 10:38:15 np0005592157 podman[365754]: 2026-01-22 15:38:15.875410743 +0000 UTC m=+1.392531132 container died 269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:38:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 10:38:16 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0e46ab264439b9d83d185069c47980f937bf9db782e45088e6e2b48287e6b38b-merged.mount: Deactivated successfully.
Jan 22 10:38:16 np0005592157 podman[365754]: 2026-01-22 15:38:16.617170376 +0000 UTC m=+2.134290655 container remove 269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wiles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:38:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:38:16 np0005592157 systemd[1]: libpod-conmon-269cbf854c3762326546d5b4e1e9957734ab35bcf7fb169c9b6cf190bca94f9c.scope: Deactivated successfully.
Jan 22 10:38:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:16.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:17.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:17 np0005592157 podman[365989]: 2026-01-22 15:38:17.333709344 +0000 UTC m=+0.068487218 container create 987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:38:17 np0005592157 systemd[1]: Started libpod-conmon-987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701.scope.
Jan 22 10:38:17 np0005592157 podman[365989]: 2026-01-22 15:38:17.290621207 +0000 UTC m=+0.025399101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:38:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:38:17 np0005592157 podman[365989]: 2026-01-22 15:38:17.465529561 +0000 UTC m=+0.200307435 container init 987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_solomon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:38:17 np0005592157 podman[365989]: 2026-01-22 15:38:17.473024547 +0000 UTC m=+0.207802411 container start 987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_solomon, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:38:17 np0005592157 serene_solomon[366005]: 167 167
Jan 22 10:38:17 np0005592157 systemd[1]: libpod-987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701.scope: Deactivated successfully.
Jan 22 10:38:17 np0005592157 podman[365989]: 2026-01-22 15:38:17.496689914 +0000 UTC m=+0.231467818 container attach 987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_solomon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 10:38:17 np0005592157 podman[365989]: 2026-01-22 15:38:17.498253162 +0000 UTC m=+0.233031036 container died 987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_solomon, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:38:17 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f48a6498062d0a593bf530bcbfd7005fbc0e95c8a23c2619c7fa22a72749d285-merged.mount: Deactivated successfully.
Jan 22 10:38:17 np0005592157 podman[365989]: 2026-01-22 15:38:17.533645969 +0000 UTC m=+0.268423843 container remove 987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:38:17 np0005592157 systemd[1]: libpod-conmon-987c87a84e41e42c522cf2f672ce570e9899484f8a5838a63e042507e6734701.scope: Deactivated successfully.
Jan 22 10:38:17 np0005592157 podman[366006]: 2026-01-22 15:38:17.545945664 +0000 UTC m=+0.143935868 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 10:38:17 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:17 np0005592157 podman[366048]: 2026-01-22 15:38:17.687455171 +0000 UTC m=+0.040728830 container create 1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 10:38:17 np0005592157 systemd[1]: Started libpod-conmon-1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81.scope.
Jan 22 10:38:17 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:38:17 np0005592157 podman[366048]: 2026-01-22 15:38:17.670298636 +0000 UTC m=+0.023572315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:38:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f101b19c21399db0ef9d2e29311b764fabd38cf0b0ef65713a733b30b8214a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f101b19c21399db0ef9d2e29311b764fabd38cf0b0ef65713a733b30b8214a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f101b19c21399db0ef9d2e29311b764fabd38cf0b0ef65713a733b30b8214a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:17 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f101b19c21399db0ef9d2e29311b764fabd38cf0b0ef65713a733b30b8214a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:17 np0005592157 podman[366048]: 2026-01-22 15:38:17.785646085 +0000 UTC m=+0.138919814 container init 1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 10:38:17 np0005592157 podman[366048]: 2026-01-22 15:38:17.797722174 +0000 UTC m=+0.150995873 container start 1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jennings, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Jan 22 10:38:17 np0005592157 podman[366048]: 2026-01-22 15:38:17.803697472 +0000 UTC m=+0.156971201 container attach 1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jennings, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 10:38:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]: {
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:    "0": [
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:        {
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "devices": [
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "/dev/loop3"
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            ],
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "lv_name": "ceph_lv0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "lv_size": "7511998464",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "name": "ceph_lv0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "tags": {
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.cluster_name": "ceph",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.crush_device_class": "",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.encrypted": "0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.osd_id": "0",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.type": "block",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:                "ceph.vdo": "0"
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            },
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "type": "block",
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:            "vg_name": "ceph_vg0"
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:        }
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]:    ]
Jan 22 10:38:18 np0005592157 wizardly_jennings[366064]: }
Jan 22 10:38:18 np0005592157 systemd[1]: libpod-1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81.scope: Deactivated successfully.
Jan 22 10:38:18 np0005592157 podman[366048]: 2026-01-22 15:38:18.571790057 +0000 UTC m=+0.925063766 container died 1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:38:18 np0005592157 systemd[1]: var-lib-containers-storage-overlay-6f101b19c21399db0ef9d2e29311b764fabd38cf0b0ef65713a733b30b8214a1-merged.mount: Deactivated successfully.
Jan 22 10:38:18 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:18 np0005592157 podman[366048]: 2026-01-22 15:38:18.642558101 +0000 UTC m=+0.995831760 container remove 1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jennings, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:38:18 np0005592157 systemd[1]: libpod-conmon-1dacaf960b320b182ea2f59c199ff47fa1a6a293c02e905e4ac072c247fb7c81.scope: Deactivated successfully.
Jan 22 10:38:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:38:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:18.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:38:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:19.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:19 np0005592157 podman[366226]: 2026-01-22 15:38:19.357942311 +0000 UTC m=+0.051350734 container create cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 10:38:19 np0005592157 systemd[1]: Started libpod-conmon-cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82.scope.
Jan 22 10:38:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:38:19 np0005592157 podman[366226]: 2026-01-22 15:38:19.340656023 +0000 UTC m=+0.034064466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:38:19 np0005592157 podman[366226]: 2026-01-22 15:38:19.440585759 +0000 UTC m=+0.133994212 container init cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:38:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:19 np0005592157 podman[366226]: 2026-01-22 15:38:19.447821868 +0000 UTC m=+0.141230311 container start cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:38:19 np0005592157 podman[366226]: 2026-01-22 15:38:19.451565611 +0000 UTC m=+0.144974074 container attach cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 10:38:19 np0005592157 busy_knuth[366242]: 167 167
Jan 22 10:38:19 np0005592157 systemd[1]: libpod-cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82.scope: Deactivated successfully.
Jan 22 10:38:19 np0005592157 podman[366226]: 2026-01-22 15:38:19.454556405 +0000 UTC m=+0.147964838 container died cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:38:19 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e87916668b4c485164582d663cabd9b266288f58f37bb3aa1fa6eeee7502ac9e-merged.mount: Deactivated successfully.
Jan 22 10:38:19 np0005592157 podman[366226]: 2026-01-22 15:38:19.505357304 +0000 UTC m=+0.198765727 container remove cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:38:19 np0005592157 systemd[1]: libpod-conmon-cca864681e817dfa34335d3f81357e1e2d3f20c7fe8e8b93b15b5177d6095d82.scope: Deactivated successfully.
Jan 22 10:38:19 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:19 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:19 np0005592157 podman[366268]: 2026-01-22 15:38:19.76219256 +0000 UTC m=+0.072003796 container create 2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:38:19 np0005592157 systemd[1]: Started libpod-conmon-2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d.scope.
Jan 22 10:38:19 np0005592157 podman[366268]: 2026-01-22 15:38:19.730107374 +0000 UTC m=+0.039918650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:38:19 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:38:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96db1a0f754279c3b0fe03804e53da5996c15253f794c21f74591eb50fe2fe65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96db1a0f754279c3b0fe03804e53da5996c15253f794c21f74591eb50fe2fe65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96db1a0f754279c3b0fe03804e53da5996c15253f794c21f74591eb50fe2fe65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:19 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96db1a0f754279c3b0fe03804e53da5996c15253f794c21f74591eb50fe2fe65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:38:19 np0005592157 podman[366268]: 2026-01-22 15:38:19.853102143 +0000 UTC m=+0.162913359 container init 2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 10:38:19 np0005592157 podman[366268]: 2026-01-22 15:38:19.865406028 +0000 UTC m=+0.175217224 container start 2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:38:19 np0005592157 podman[366268]: 2026-01-22 15:38:19.8691304 +0000 UTC m=+0.178941696 container attach 2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:38:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:20 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]: {
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]:        "osd_id": 0,
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]:        "type": "bluestore"
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]:    }
Jan 22 10:38:20 np0005592157 hardcore_yonath[366286]: }
Jan 22 10:38:20 np0005592157 systemd[1]: libpod-2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d.scope: Deactivated successfully.
Jan 22 10:38:20 np0005592157 podman[366268]: 2026-01-22 15:38:20.77376127 +0000 UTC m=+1.083572466 container died 2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:38:20 np0005592157 systemd[1]: var-lib-containers-storage-overlay-96db1a0f754279c3b0fe03804e53da5996c15253f794c21f74591eb50fe2fe65-merged.mount: Deactivated successfully.
Jan 22 10:38:20 np0005592157 podman[366268]: 2026-01-22 15:38:20.845527848 +0000 UTC m=+1.155339054 container remove 2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:38:20 np0005592157 systemd[1]: libpod-conmon-2f79fd1cd054be4f9f1f7b00ea4fb8f15c65ed3dc756d695f8b7729a6141144d.scope: Deactivated successfully.
Jan 22 10:38:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:38:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:38:20 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e430b64e-f024-4e4d-b6be-9e6311a2f2a4 does not exist
Jan 22 10:38:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7cf17bee-534f-4263-a4b2-fe3718bda53e does not exist
Jan 22 10:38:20 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 127ec13c-db6e-4c9c-9fae-f05b353743b5 does not exist
Jan 22 10:38:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:21.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:21 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:21 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:22 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:23 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:24.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:24 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:25.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:25 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:26 np0005592157 podman[366370]: 2026-01-22 15:38:26.448251822 +0000 UTC m=+0.169202084 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 22 10:38:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:26 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:27.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:27 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:28.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:29 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:29.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:30 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:30 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:30.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:31 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:31.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:31 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:38:31.236 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:38:31 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:38:31.237 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:38:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:32 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:32.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:33.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:33 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:34 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:34 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:34.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:35.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:35 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:36 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:38:36.239 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:38:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:36.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:37 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:38 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:38 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:38.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:39 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:39.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:40 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:40 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:38:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:40.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:38:41 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:42 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:42.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:43 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:44 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:44 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:44.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:45.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:45 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:38:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:38:46 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:46.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:47.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:38:47.681 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:38:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:38:47.681 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:38:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:38:47.681 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:38:47
Jan 22 10:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'volumes']
Jan 22 10:38:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:38:47 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:48 np0005592157 podman[366457]: 2026-01-22 15:38:48.309622765 +0000 UTC m=+0.050965164 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:38:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:48.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #249. Immutable memtables: 0.
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.715559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 157] Flushing memtable with next log file: 249
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329715617, "job": 157, "event": "flush_started", "num_memtables": 1, "num_entries": 1206, "num_deletes": 380, "total_data_size": 1412769, "memory_usage": 1446496, "flush_reason": "Manual Compaction"}
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 157] Level-0 flush table #250: started
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329826525, "cf_name": "default", "job": 157, "event": "table_file_creation", "file_number": 250, "file_size": 1389488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112668, "largest_seqno": 113872, "table_properties": {"data_size": 1384040, "index_size": 2458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16488, "raw_average_key_size": 22, "raw_value_size": 1371300, "raw_average_value_size": 1838, "num_data_blocks": 104, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 380, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096258, "oldest_key_time": 1769096258, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 250, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 157] Flush lasted 111434 microseconds, and 5011 cpu microseconds.
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.826919) [db/flush_job.cc:967] [default] [JOB 157] Level-0 flush table #250: 1389488 bytes OK
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.827116) [db/memtable_list.cc:519] [default] Level-0 commit table #250 started
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.850279) [db/memtable_list.cc:722] [default] Level-0 commit table #250: memtable #1 done
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.850337) EVENT_LOG_v1 {"time_micros": 1769096329850322, "job": 157, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.850371) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 157] Try to delete WAL files size 1406635, prev total WAL file size 1406635, number of live WAL files 2.
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000246.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.852062) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035373930' seq:72057594037927935, type:22 .. '6C6F676D0036303433' seq:0, type:0; will stop at (end)
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 158] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 157 Base level 0, inputs: [250(1356KB)], [248(12MB)]
Jan 22 10:38:49 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329852152, "job": 158, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [250], "files_L6": [248], "score": -1, "input_data_size": 14235288, "oldest_snapshot_seqno": -1}
Jan 22 10:38:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 158] Generated table #251: 14518 keys, 14040801 bytes, temperature: kUnknown
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096330241645, "cf_name": "default", "job": 158, "event": "table_file_creation", "file_number": 251, "file_size": 14040801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13959113, "index_size": 44263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 398852, "raw_average_key_size": 27, "raw_value_size": 13710217, "raw_average_value_size": 944, "num_data_blocks": 1614, "num_entries": 14518, "num_filter_entries": 14518, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 251, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.242020) [db/compaction/compaction_job.cc:1663] [default] [JOB 158] Compacted 1@0 + 1@6 files to L6 => 14040801 bytes
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.328893) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.5 rd, 36.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(20.4) write-amplify(10.1) OK, records in: 15297, records dropped: 779 output_compression: NoCompression
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.328968) EVENT_LOG_v1 {"time_micros": 1769096330328922, "job": 158, "event": "compaction_finished", "compaction_time_micros": 389574, "compaction_time_cpu_micros": 54550, "output_level": 6, "num_output_files": 1, "total_output_size": 14040801, "num_input_records": 15297, "num_output_records": 14518, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000250.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096330329490, "job": 158, "event": "table_file_deletion", "file_number": 250}
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000248.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096330334236, "job": 158, "event": "table_file_deletion", "file_number": 248}
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:49.851950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.334407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.334441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.334443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.334445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:38:50.334447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:38:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:50 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:51 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:52.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:52 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:53 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:54.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:55 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:55 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:55 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:38:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:55.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:38:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:56 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:56.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:57.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:57 np0005592157 podman[366530]: 2026-01-22 15:38:57.386740365 +0000 UTC m=+0.117564915 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:38:57 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:38:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:58.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:59 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:38:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:38:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:59.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:38:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:00 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:00 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:00 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:00.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:39:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:39:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:02 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:02 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:02.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:03.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:03 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:04 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:04.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:05.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:39:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:39:05 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:05 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:06 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:06.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:07 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:39:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:08.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:39:09 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:39:09.915 157426 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:39:09 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:39:09.917 157426 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:39:10 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:10 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:10 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:10.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:11 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:12 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:12.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:13 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:13.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:13 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:39:13.919 157426 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7335e41f-b1b8-4c04-9c19-8788162d5bb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:39:14 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:14.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:15 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:15 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:15.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:39:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:39:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:16.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:17 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:18 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:39:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4252721326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:39:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:39:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4252721326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:39:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:18.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:19 np0005592157 podman[366620]: 2026-01-22 15:39:19.309346947 +0000 UTC m=+0.049239662 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 10:39:19 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:20.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:21 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:21 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:21.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:39:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:21 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:39:21 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:22 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:22 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:23.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:23 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:24.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:25 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:25 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:39:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:25.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:25 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:39:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:39:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:26.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:26 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev af5ea906-5594-4493-9102-2efa3251df9d does not exist
Jan 22 10:39:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a8fe74ef-a31b-4a03-b39c-890186b99f5a does not exist
Jan 22 10:39:27 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 25fbb484-f3e4-48c8-b49c-3a2e9aee0a04 does not exist
Jan 22 10:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:39:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:39:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:39:27 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:39:27 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:39:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:27.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:27 np0005592157 podman[367034]: 2026-01-22 15:39:27.651622673 +0000 UTC m=+0.033989303 container create dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:39:27 np0005592157 systemd[1]: Started libpod-conmon-dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305.scope.
Jan 22 10:39:27 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:39:27 np0005592157 podman[367034]: 2026-01-22 15:39:27.704976406 +0000 UTC m=+0.087343046 container init dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 10:39:27 np0005592157 podman[367034]: 2026-01-22 15:39:27.71202619 +0000 UTC m=+0.094392820 container start dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:39:27 np0005592157 podman[367034]: 2026-01-22 15:39:27.716554043 +0000 UTC m=+0.098920693 container attach dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:39:27 np0005592157 quizzical_mirzakhani[367052]: 167 167
Jan 22 10:39:27 np0005592157 systemd[1]: libpod-dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305.scope: Deactivated successfully.
Jan 22 10:39:27 np0005592157 podman[367034]: 2026-01-22 15:39:27.718250445 +0000 UTC m=+0.100617105 container died dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mirzakhani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 10:39:27 np0005592157 podman[367034]: 2026-01-22 15:39:27.635868583 +0000 UTC m=+0.018235233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:27 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f48e9608faa145db697df3681f044b334402b9a1d8b43223193c7692c661ef94-merged.mount: Deactivated successfully.
Jan 22 10:39:27 np0005592157 podman[367034]: 2026-01-22 15:39:27.765209998 +0000 UTC m=+0.147576628 container remove dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:39:27 np0005592157 systemd[1]: libpod-conmon-dcc2f245a74325f89dab5df66b119e59f8cc7ed51482772a72d88e659819d305.scope: Deactivated successfully.
Jan 22 10:39:27 np0005592157 podman[367048]: 2026-01-22 15:39:27.805873376 +0000 UTC m=+0.124280891 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 10:39:28 np0005592157 podman[367102]: 2026-01-22 15:39:27.923813429 +0000 UTC m=+0.025122073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:39:28 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:28 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:39:28 np0005592157 podman[367102]: 2026-01-22 15:39:28.116634527 +0000 UTC m=+0.217943171 container create 300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lewin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 10:39:28 np0005592157 systemd[1]: Started libpod-conmon-300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521.scope.
Jan 22 10:39:28 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:39:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4bed9bb24041c0d38962e984b578b545c1a40d787605bcf5c6256b90c41975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4bed9bb24041c0d38962e984b578b545c1a40d787605bcf5c6256b90c41975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4bed9bb24041c0d38962e984b578b545c1a40d787605bcf5c6256b90c41975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4bed9bb24041c0d38962e984b578b545c1a40d787605bcf5c6256b90c41975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:28 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4bed9bb24041c0d38962e984b578b545c1a40d787605bcf5c6256b90c41975/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:28 np0005592157 podman[367102]: 2026-01-22 15:39:28.20549359 +0000 UTC m=+0.306802204 container init 300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lewin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:39:28 np0005592157 podman[367102]: 2026-01-22 15:39:28.211917369 +0000 UTC m=+0.313225983 container start 300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:39:28 np0005592157 podman[367102]: 2026-01-22 15:39:28.214630317 +0000 UTC m=+0.315938951 container attach 300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lewin, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:39:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:28.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:28 np0005592157 magical_lewin[367119]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:39:28 np0005592157 magical_lewin[367119]: --> relative data size: 1.0
Jan 22 10:39:28 np0005592157 magical_lewin[367119]: --> All data devices are unavailable
Jan 22 10:39:28 np0005592157 systemd[1]: libpod-300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521.scope: Deactivated successfully.
Jan 22 10:39:28 np0005592157 podman[367102]: 2026-01-22 15:39:28.99107104 +0000 UTC m=+1.092379654 container died 300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lewin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:39:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7c4bed9bb24041c0d38962e984b578b545c1a40d787605bcf5c6256b90c41975-merged.mount: Deactivated successfully.
Jan 22 10:39:29 np0005592157 podman[367102]: 2026-01-22 15:39:29.046153615 +0000 UTC m=+1.147462229 container remove 300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lewin, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:39:29 np0005592157 systemd[1]: libpod-conmon-300b123f8276f42a974d76055fe6bca45582a43cc4f17ca3381acba672722521.scope: Deactivated successfully.
Jan 22 10:39:29 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:29.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 195 slow ops, oldest one blocked for 7358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:29 np0005592157 podman[367290]: 2026-01-22 15:39:29.640373951 +0000 UTC m=+0.037261634 container create a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:39:29 np0005592157 systemd[1]: Started libpod-conmon-a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50.scope.
Jan 22 10:39:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:39:29 np0005592157 podman[367290]: 2026-01-22 15:39:29.707578187 +0000 UTC m=+0.104465900 container init a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ride, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 10:39:29 np0005592157 podman[367290]: 2026-01-22 15:39:29.714981991 +0000 UTC m=+0.111869694 container start a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ride, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:39:29 np0005592157 podman[367290]: 2026-01-22 15:39:29.718367254 +0000 UTC m=+0.115254957 container attach a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:39:29 np0005592157 loving_ride[367306]: 167 167
Jan 22 10:39:29 np0005592157 podman[367290]: 2026-01-22 15:39:29.624137149 +0000 UTC m=+0.021024852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:29 np0005592157 systemd[1]: libpod-a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50.scope: Deactivated successfully.
Jan 22 10:39:29 np0005592157 podman[367290]: 2026-01-22 15:39:29.720384154 +0000 UTC m=+0.117271847 container died a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:39:29 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2fb0afcd054c4178814c122dd79e0121273eaa2981922eb2743b567b5a1e4dbb-merged.mount: Deactivated successfully.
Jan 22 10:39:29 np0005592157 podman[367290]: 2026-01-22 15:39:29.761323849 +0000 UTC m=+0.158211542 container remove a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ride, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:39:29 np0005592157 systemd[1]: libpod-conmon-a132c7ea73134bc7b57c5996a64960b5b23157901aac4bf5bac04c02871bbc50.scope: Deactivated successfully.
Jan 22 10:39:29 np0005592157 podman[367330]: 2026-01-22 15:39:29.919750605 +0000 UTC m=+0.045209541 container create 88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:39:29 np0005592157 systemd[1]: Started libpod-conmon-88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506.scope.
Jan 22 10:39:29 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:39:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9e8f4a82967791aae53107ad0b44feb53d4adc68c224494e2bf47f29d324a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9e8f4a82967791aae53107ad0b44feb53d4adc68c224494e2bf47f29d324a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9e8f4a82967791aae53107ad0b44feb53d4adc68c224494e2bf47f29d324a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:29 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9e8f4a82967791aae53107ad0b44feb53d4adc68c224494e2bf47f29d324a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:29 np0005592157 podman[367330]: 2026-01-22 15:39:29.995305798 +0000 UTC m=+0.120764764 container init 88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 10:39:29 np0005592157 podman[367330]: 2026-01-22 15:39:29.902800185 +0000 UTC m=+0.028259141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:30 np0005592157 podman[367330]: 2026-01-22 15:39:30.001004519 +0000 UTC m=+0.126463455 container start 88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:39:30 np0005592157 podman[367330]: 2026-01-22 15:39:30.004098226 +0000 UTC m=+0.129557182 container attach 88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 22 10:39:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:30 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:30 np0005592157 ceph-mon[74359]: Health check update: 195 slow ops, oldest one blocked for 7358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]: {
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:    "0": [
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:        {
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "devices": [
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "/dev/loop3"
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            ],
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "lv_name": "ceph_lv0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "lv_size": "7511998464",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "name": "ceph_lv0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "tags": {
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.cluster_name": "ceph",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.crush_device_class": "",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.encrypted": "0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.osd_id": "0",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.type": "block",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:                "ceph.vdo": "0"
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            },
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "type": "block",
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:            "vg_name": "ceph_vg0"
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:        }
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]:    ]
Jan 22 10:39:30 np0005592157 awesome_solomon[367346]: }
Jan 22 10:39:30 np0005592157 systemd[1]: libpod-88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506.scope: Deactivated successfully.
Jan 22 10:39:30 np0005592157 podman[367330]: 2026-01-22 15:39:30.757876396 +0000 UTC m=+0.883335332 container died 88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 10:39:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:30.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:30 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5b9e8f4a82967791aae53107ad0b44feb53d4adc68c224494e2bf47f29d324a6-merged.mount: Deactivated successfully.
Jan 22 10:39:30 np0005592157 podman[367330]: 2026-01-22 15:39:30.820074938 +0000 UTC m=+0.945533874 container remove 88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 10:39:30 np0005592157 systemd[1]: libpod-conmon-88fa436887e738a01603cbbf73ec6204ba0f66a9b4ed2de1dc1706f0de06c506.scope: Deactivated successfully.
Jan 22 10:39:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:39:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:39:31 np0005592157 podman[367510]: 2026-01-22 15:39:31.544009099 +0000 UTC m=+0.040134336 container create faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:39:31 np0005592157 systemd[1]: Started libpod-conmon-faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416.scope.
Jan 22 10:39:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:39:31 np0005592157 podman[367510]: 2026-01-22 15:39:31.52669919 +0000 UTC m=+0.022824447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:31 np0005592157 podman[367510]: 2026-01-22 15:39:31.629830956 +0000 UTC m=+0.125956203 container init faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:39:31 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:31 np0005592157 podman[367510]: 2026-01-22 15:39:31.64088892 +0000 UTC m=+0.137014167 container start faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:39:31 np0005592157 quizzical_hawking[367527]: 167 167
Jan 22 10:39:31 np0005592157 systemd[1]: libpod-faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416.scope: Deactivated successfully.
Jan 22 10:39:31 np0005592157 podman[367510]: 2026-01-22 15:39:31.645400572 +0000 UTC m=+0.141525819 container attach faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:39:31 np0005592157 podman[367510]: 2026-01-22 15:39:31.645922325 +0000 UTC m=+0.142047562 container died faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:39:31 np0005592157 systemd[1]: var-lib-containers-storage-overlay-0002338204ceb6f74be12e2992f749e41a142ee0404539fce0bd450defb6dced-merged.mount: Deactivated successfully.
Jan 22 10:39:31 np0005592157 podman[367510]: 2026-01-22 15:39:31.677260692 +0000 UTC m=+0.173385929 container remove faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 22 10:39:31 np0005592157 systemd[1]: libpod-conmon-faa617ff21727a144f876e9648e0a129637f65d1b79b750abf70c9a6b6222416.scope: Deactivated successfully.
Jan 22 10:39:31 np0005592157 podman[367552]: 2026-01-22 15:39:31.87166254 +0000 UTC m=+0.064591372 container create 88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 10:39:31 np0005592157 systemd[1]: Started libpod-conmon-88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469.scope.
Jan 22 10:39:31 np0005592157 podman[367552]: 2026-01-22 15:39:31.836519979 +0000 UTC m=+0.029448891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:31 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:39:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ac151f4db1716444292b5efb7e804fbb63ded56c21959c12b0e8fe7755ee7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ac151f4db1716444292b5efb7e804fbb63ded56c21959c12b0e8fe7755ee7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ac151f4db1716444292b5efb7e804fbb63ded56c21959c12b0e8fe7755ee7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:31 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ac151f4db1716444292b5efb7e804fbb63ded56c21959c12b0e8fe7755ee7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:31 np0005592157 podman[367552]: 2026-01-22 15:39:31.969183716 +0000 UTC m=+0.162112598 container init 88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:39:31 np0005592157 podman[367552]: 2026-01-22 15:39:31.979690377 +0000 UTC m=+0.172619209 container start 88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:39:31 np0005592157 podman[367552]: 2026-01-22 15:39:31.984900936 +0000 UTC m=+0.177829738 container attach 88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mayer, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 22 10:39:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:32 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:32.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]: {
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]:        "osd_id": 0,
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]:        "type": "bluestore"
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]:    }
Jan 22 10:39:32 np0005592157 affectionate_mayer[367570]: }
Jan 22 10:39:32 np0005592157 systemd[1]: libpod-88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469.scope: Deactivated successfully.
Jan 22 10:39:32 np0005592157 podman[367552]: 2026-01-22 15:39:32.844732396 +0000 UTC m=+1.037661198 container died 88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mayer, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:39:32 np0005592157 systemd[1]: var-lib-containers-storage-overlay-04ac151f4db1716444292b5efb7e804fbb63ded56c21959c12b0e8fe7755ee7e-merged.mount: Deactivated successfully.
Jan 22 10:39:32 np0005592157 podman[367552]: 2026-01-22 15:39:32.890461469 +0000 UTC m=+1.083390261 container remove 88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mayer, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:39:32 np0005592157 systemd[1]: libpod-conmon-88e02ec32db1a184dc5b8c38f0870c3f2f0c37b4e2aebd5eb81e807fe180e469.scope: Deactivated successfully.
Jan 22 10:39:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:39:32 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:32 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:39:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 351356c3-40e7-4b45-9f60-8bad21f67e66 does not exist
Jan 22 10:39:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 52f75f2a-7a6b-4646-82fd-3a89967986a0 does not exist
Jan 22 10:39:33 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 22c2d302-f951-43db-a87c-a022d5a66081 does not exist
Jan 22 10:39:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:34 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:34 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:34.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 7363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:35.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:35 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:35 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 7363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:36.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:37 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:37 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:37.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:38.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:39.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:39 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:39 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 7368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:40 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:40 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 7368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:40.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:41.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:41 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:42 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:43.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:43 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:44 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 7373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:45.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:45 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:45 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 7373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:46 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:39:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:39:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:46.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:47.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:47 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:39:47.682 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:39:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:39:47.683 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:39:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:39:47.683 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:39:47
Jan 22 10:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'images', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Jan 22 10:39:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:39:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:48 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:48.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:49.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:49 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 7378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:50 np0005592157 podman[367712]: 2026-01-22 15:39:50.33577252 +0000 UTC m=+0.064457929 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 10:39:50 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 7378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:50 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:50.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:51.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:51 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:52 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:52.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:53.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:53 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 7383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:54.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:54 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:55.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:55 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 7383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:55 np0005592157 ceph-mon[74359]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:56.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:56 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:57.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:57 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:57 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:39:58 np0005592157 podman[367785]: 2026-01-22 15:39:58.393813743 +0000 UTC m=+0.123111962 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:39:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:58.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:59 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:39:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:39:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:59.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:39:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592157 ceph-mon[74359]: Health check update: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:00 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:00 np0005592157 ceph-mon[74359]: Health detail: HEALTH_WARN 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592157 ceph-mon[74359]: [WRN] SLOW_OPS: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:00.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:40:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:01.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:40:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:02 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:03 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:03 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 7393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #252. Immutable memtables: 0.
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.825067) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 159] Flushing memtable with next log file: 252
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404825164, "job": 159, "event": "flush_started", "num_memtables": 1, "num_entries": 1243, "num_deletes": 384, "total_data_size": 1520635, "memory_usage": 1556920, "flush_reason": "Manual Compaction"}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 159] Level-0 flush table #253: started
Jan 22 10:40:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:40:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:04.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404836143, "cf_name": "default", "job": 159, "event": "table_file_creation", "file_number": 253, "file_size": 989946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 113873, "largest_seqno": 115115, "table_properties": {"data_size": 985175, "index_size": 1845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 16963, "raw_average_key_size": 23, "raw_value_size": 973300, "raw_average_value_size": 1333, "num_data_blocks": 77, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 384, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096330, "oldest_key_time": 1769096330, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 253, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 159] Flush lasted 11116 microseconds, and 4911 cpu microseconds.
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.836197) [db/flush_job.cc:967] [default] [JOB 159] Level-0 flush table #253: 989946 bytes OK
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.836214) [db/memtable_list.cc:519] [default] Level-0 commit table #253 started
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.837825) [db/memtable_list.cc:722] [default] Level-0 commit table #253: memtable #1 done
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.837840) EVENT_LOG_v1 {"time_micros": 1769096404837835, "job": 159, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.837861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 159] Try to delete WAL files size 1514334, prev total WAL file size 1514334, number of live WAL files 2.
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000249.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.838671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353039' seq:72057594037927935, type:22 .. '6D6772737461740033373632' seq:0, type:0; will stop at (end)
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 160] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 159 Base level 0, inputs: [253(966KB)], [251(13MB)]
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404838762, "job": 160, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [253], "files_L6": [251], "score": -1, "input_data_size": 15030747, "oldest_snapshot_seqno": -1}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 160] Generated table #254: 14497 keys, 11546804 bytes, temperature: kUnknown
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404951569, "cf_name": "default", "job": 160, "event": "table_file_creation", "file_number": 254, "file_size": 11546804, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11468880, "index_size": 40563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36293, "raw_key_size": 398183, "raw_average_key_size": 27, "raw_value_size": 11223890, "raw_average_value_size": 774, "num_data_blocks": 1460, "num_entries": 14497, "num_filter_entries": 14497, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 254, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.951808) [db/compaction/compaction_job.cc:1663] [default] [JOB 160] Compacted 1@0 + 1@6 files to L6 => 11546804 bytes
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.953309) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.2 rd, 102.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(26.8) write-amplify(11.7) OK, records in: 15248, records dropped: 751 output_compression: NoCompression
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.953327) EVENT_LOG_v1 {"time_micros": 1769096404953318, "job": 160, "event": "compaction_finished", "compaction_time_micros": 112875, "compaction_time_cpu_micros": 40870, "output_level": 6, "num_output_files": 1, "total_output_size": 11546804, "num_input_records": 15248, "num_output_records": 14497, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000253.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404953764, "job": 160, "event": "table_file_deletion", "file_number": 253}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000251.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404956127, "job": 160, "event": "table_file_deletion", "file_number": 251}
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.838555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:05.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:40:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:40:05 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 7393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:05 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:06 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:06.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:07 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:08.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:09 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 7398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:10 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:10 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 7398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:40:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:40:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:11.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:11 np0005592157 ceph-mon[74359]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:12.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:12 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:14 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 98 slow ops, oldest one blocked for 7403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:14.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:15 np0005592157 ceph-mon[74359]: Health check update: 98 slow ops, oldest one blocked for 7403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:16 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:40:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:40:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:16.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:17 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:40:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/18665897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:40:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:40:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/18665897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:40:18 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:18.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:19 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:20.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:20 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:21 np0005592157 podman[367874]: 2026-01-22 15:40:21.325738577 +0000 UTC m=+0.055629519 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:40:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:21 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:22.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:22 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:24 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:24 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:40:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:24.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:40:25 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:25 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:25.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:26 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:26.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:27 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:28.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:28 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:29 np0005592157 podman[367895]: 2026-01-22 15:40:29.384897399 +0000 UTC m=+0.117392760 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:40:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:29.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:29 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:29 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:30.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:30 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:40:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:31.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #255. Immutable memtables: 0.
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.966433) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 161] Flushing memtable with next log file: 255
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431966526, "job": 161, "event": "flush_started", "num_memtables": 1, "num_entries": 601, "num_deletes": 298, "total_data_size": 485655, "memory_usage": 497352, "flush_reason": "Manual Compaction"}
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 161] Level-0 flush table #256: started
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431972657, "cf_name": "default", "job": 161, "event": "table_file_creation", "file_number": 256, "file_size": 476871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 115117, "largest_seqno": 115716, "table_properties": {"data_size": 473883, "index_size": 831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9239, "raw_average_key_size": 21, "raw_value_size": 467164, "raw_average_value_size": 1076, "num_data_blocks": 36, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 298, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096405, "oldest_key_time": 1769096405, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 256, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 161] Flush lasted 6300 microseconds, and 2916 cpu microseconds.
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.972746) [db/flush_job.cc:967] [default] [JOB 161] Level-0 flush table #256: 476871 bytes OK
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.972769) [db/memtable_list.cc:519] [default] Level-0 commit table #256 started
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975777) [db/memtable_list.cc:722] [default] Level-0 commit table #256: memtable #1 done
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975805) EVENT_LOG_v1 {"time_micros": 1769096431975797, "job": 161, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975828) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 161] Try to delete WAL files size 482158, prev total WAL file size 482158, number of live WAL files 2.
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000252.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.976489) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 162] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 161 Base level 0, inputs: [256(465KB)], [254(11MB)]
Jan 22 10:40:31 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431976535, "job": 162, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [256], "files_L6": [254], "score": -1, "input_data_size": 12023675, "oldest_snapshot_seqno": -1}
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 162] Generated table #257: 14326 keys, 10215386 bytes, temperature: kUnknown
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432053101, "cf_name": "default", "job": 162, "event": "table_file_creation", "file_number": 257, "file_size": 10215386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10139809, "index_size": 38664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 394916, "raw_average_key_size": 27, "raw_value_size": 9899028, "raw_average_value_size": 690, "num_data_blocks": 1380, "num_entries": 14326, "num_filter_entries": 14326, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 257, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.053366) [db/compaction/compaction_job.cc:1663] [default] [JOB 162] Compacted 1@0 + 1@6 files to L6 => 10215386 bytes
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.055047) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.9 rd, 133.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(46.6) write-amplify(21.4) OK, records in: 14931, records dropped: 605 output_compression: NoCompression
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.055066) EVENT_LOG_v1 {"time_micros": 1769096432055056, "job": 162, "event": "compaction_finished", "compaction_time_micros": 76650, "compaction_time_cpu_micros": 41808, "output_level": 6, "num_output_files": 1, "total_output_size": 10215386, "num_input_records": 14931, "num_output_records": 14326, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000256.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432055264, "job": 162, "event": "table_file_deletion", "file_number": 256}
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000254.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432057428, "job": 162, "event": "table_file_deletion", "file_number": 254}
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:31.976423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.057524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.057532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.057534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.057536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:40:32.057538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:32.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:32 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:33.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 10:40:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:33 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 10:40:33 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:34.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev c95f1a1a-a522-4dd9-98a1-3834bb3f0406 does not exist
Jan 22 10:40:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev fa203c59-2956-4580-a650-854ec1cf3fed does not exist
Jan 22 10:40:34 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7d63a443-fa88-4cbd-9015-712550cecd61 does not exist
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:40:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:35.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:35 np0005592157 podman[368195]: 2026-01-22 15:40:35.57301491 +0000 UTC m=+0.041684404 container create 999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:40:35 np0005592157 systemd[1]: Started libpod-conmon-999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1.scope.
Jan 22 10:40:35 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:40:35 np0005592157 podman[368195]: 2026-01-22 15:40:35.556952502 +0000 UTC m=+0.025622016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:40:35 np0005592157 podman[368195]: 2026-01-22 15:40:35.657623327 +0000 UTC m=+0.126292841 container init 999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 10:40:35 np0005592157 podman[368195]: 2026-01-22 15:40:35.664659961 +0000 UTC m=+0.133329455 container start 999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:40:35 np0005592157 podman[368195]: 2026-01-22 15:40:35.668748852 +0000 UTC m=+0.137418366 container attach 999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poitras, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:40:35 np0005592157 determined_poitras[368211]: 167 167
Jan 22 10:40:35 np0005592157 systemd[1]: libpod-999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1.scope: Deactivated successfully.
Jan 22 10:40:35 np0005592157 podman[368195]: 2026-01-22 15:40:35.67226336 +0000 UTC m=+0.140932894 container died 999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poitras, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:40:35 np0005592157 systemd[1]: var-lib-containers-storage-overlay-33070ba395277c5f4562b59980d5b8ebc16addfcbdbe22689d69999edf30e32d-merged.mount: Deactivated successfully.
Jan 22 10:40:35 np0005592157 podman[368195]: 2026-01-22 15:40:35.721334306 +0000 UTC m=+0.190003800 container remove 999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 10:40:35 np0005592157 systemd[1]: libpod-conmon-999f0a3944bc3e0944082806fdc44037a680dbdb6deef89a414f9e16d79f0ec1.scope: Deactivated successfully.
Jan 22 10:40:35 np0005592157 podman[368236]: 2026-01-22 15:40:35.944029775 +0000 UTC m=+0.065305560 container create b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_galileo, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 10:40:35 np0005592157 systemd[1]: Started libpod-conmon-b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94.scope.
Jan 22 10:40:36 np0005592157 podman[368236]: 2026-01-22 15:40:35.907481979 +0000 UTC m=+0.028757824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:40:36 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:40:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626d8946182fce98c8cba95f2f3f61ce9ec4501932aa144a472d8fcd0e9fd89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626d8946182fce98c8cba95f2f3f61ce9ec4501932aa144a472d8fcd0e9fd89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626d8946182fce98c8cba95f2f3f61ce9ec4501932aa144a472d8fcd0e9fd89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626d8946182fce98c8cba95f2f3f61ce9ec4501932aa144a472d8fcd0e9fd89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:36 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626d8946182fce98c8cba95f2f3f61ce9ec4501932aa144a472d8fcd0e9fd89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:36 np0005592157 podman[368236]: 2026-01-22 15:40:36.038278561 +0000 UTC m=+0.159554386 container init b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 10:40:36 np0005592157 podman[368236]: 2026-01-22 15:40:36.044810993 +0000 UTC m=+0.166086728 container start b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 10:40:36 np0005592157 podman[368236]: 2026-01-22 15:40:36.048868683 +0000 UTC m=+0.170144428 container attach b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 10:40:36 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:36.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:36 np0005592157 epic_galileo[368253]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:40:36 np0005592157 epic_galileo[368253]: --> relative data size: 1.0
Jan 22 10:40:36 np0005592157 epic_galileo[368253]: --> All data devices are unavailable
Jan 22 10:40:36 np0005592157 systemd[1]: libpod-b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94.scope: Deactivated successfully.
Jan 22 10:40:36 np0005592157 podman[368236]: 2026-01-22 15:40:36.990623753 +0000 UTC m=+1.111899558 container died b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:40:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7626d8946182fce98c8cba95f2f3f61ce9ec4501932aa144a472d8fcd0e9fd89-merged.mount: Deactivated successfully.
Jan 22 10:40:37 np0005592157 podman[368236]: 2026-01-22 15:40:37.055260985 +0000 UTC m=+1.176536730 container remove b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:40:37 np0005592157 systemd[1]: libpod-conmon-b52e70dfe5e7c0d370ae6bfb5eca50016ce0127973b41798ab30877cc90b9c94.scope: Deactivated successfully.
Jan 22 10:40:37 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:37 np0005592157 podman[368472]: 2026-01-22 15:40:37.693082513 +0000 UTC m=+0.039712106 container create 6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:40:37 np0005592157 systemd[1]: Started libpod-conmon-6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660.scope.
Jan 22 10:40:37 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:40:37 np0005592157 podman[368472]: 2026-01-22 15:40:37.762501263 +0000 UTC m=+0.109130886 container init 6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 10:40:37 np0005592157 podman[368472]: 2026-01-22 15:40:37.768196214 +0000 UTC m=+0.114825817 container start 6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:40:37 np0005592157 podman[368472]: 2026-01-22 15:40:37.672731718 +0000 UTC m=+0.019361331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:40:37 np0005592157 youthful_darwin[368489]: 167 167
Jan 22 10:40:37 np0005592157 podman[368472]: 2026-01-22 15:40:37.772030829 +0000 UTC m=+0.118660452 container attach 6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:40:37 np0005592157 systemd[1]: libpod-6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660.scope: Deactivated successfully.
Jan 22 10:40:37 np0005592157 podman[368472]: 2026-01-22 15:40:37.773038144 +0000 UTC m=+0.119667757 container died 6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 10:40:37 np0005592157 systemd[1]: var-lib-containers-storage-overlay-f23808dc289b51897c9e2f70e2f7fa585f7fe13b2e2ff1fe23e098dd71477276-merged.mount: Deactivated successfully.
Jan 22 10:40:37 np0005592157 podman[368472]: 2026-01-22 15:40:37.806645607 +0000 UTC m=+0.153275200 container remove 6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 10:40:37 np0005592157 systemd[1]: libpod-conmon-6d099df4237ca4d053942d6a6097449073b42d587485dba9b6146a2544bea660.scope: Deactivated successfully.
Jan 22 10:40:37 np0005592157 podman[368514]: 2026-01-22 15:40:37.980099436 +0000 UTC m=+0.050215286 container create 4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 10:40:38 np0005592157 systemd[1]: Started libpod-conmon-4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440.scope.
Jan 22 10:40:38 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:40:38 np0005592157 podman[368514]: 2026-01-22 15:40:37.9604889 +0000 UTC m=+0.030604790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:40:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7c97829b3b1dde68adc03d703bdd5c1f1ebe169a8a870ec0b283abc59c1b76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7c97829b3b1dde68adc03d703bdd5c1f1ebe169a8a870ec0b283abc59c1b76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7c97829b3b1dde68adc03d703bdd5c1f1ebe169a8a870ec0b283abc59c1b76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:38 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7c97829b3b1dde68adc03d703bdd5c1f1ebe169a8a870ec0b283abc59c1b76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:38 np0005592157 podman[368514]: 2026-01-22 15:40:38.077145961 +0000 UTC m=+0.147261821 container init 4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 10:40:38 np0005592157 podman[368514]: 2026-01-22 15:40:38.08317323 +0000 UTC m=+0.153289080 container start 4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:40:38 np0005592157 podman[368514]: 2026-01-22 15:40:38.086956664 +0000 UTC m=+0.157072534 container attach 4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 10:40:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:38 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]: {
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:    "0": [
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:        {
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "devices": [
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "/dev/loop3"
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            ],
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "lv_name": "ceph_lv0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "lv_size": "7511998464",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "name": "ceph_lv0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "tags": {
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.cluster_name": "ceph",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.crush_device_class": "",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.encrypted": "0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.osd_id": "0",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.type": "block",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:                "ceph.vdo": "0"
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            },
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "type": "block",
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:            "vg_name": "ceph_vg0"
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:        }
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]:    ]
Jan 22 10:40:38 np0005592157 frosty_roentgen[368530]: }
Jan 22 10:40:38 np0005592157 systemd[1]: libpod-4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440.scope: Deactivated successfully.
Jan 22 10:40:38 np0005592157 podman[368514]: 2026-01-22 15:40:38.863861377 +0000 UTC m=+0.933977277 container died 4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 10:40:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:38.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:38 np0005592157 systemd[1]: var-lib-containers-storage-overlay-5f7c97829b3b1dde68adc03d703bdd5c1f1ebe169a8a870ec0b283abc59c1b76-merged.mount: Deactivated successfully.
Jan 22 10:40:38 np0005592157 podman[368514]: 2026-01-22 15:40:38.918609484 +0000 UTC m=+0.988725334 container remove 4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:40:38 np0005592157 systemd[1]: libpod-conmon-4be7fc7146608edd67c1684193e5034e59bf9cb97b461bc08447bd60f458c440.scope: Deactivated successfully.
Jan 22 10:40:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:39 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:39 np0005592157 podman[368691]: 2026-01-22 15:40:39.644966066 +0000 UTC m=+0.056669506 container create c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:40:39 np0005592157 systemd[1]: Started libpod-conmon-c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80.scope.
Jan 22 10:40:39 np0005592157 podman[368691]: 2026-01-22 15:40:39.618194762 +0000 UTC m=+0.029898262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:40:39 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:40:39 np0005592157 podman[368691]: 2026-01-22 15:40:39.729433589 +0000 UTC m=+0.141137119 container init c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 22 10:40:39 np0005592157 podman[368691]: 2026-01-22 15:40:39.734957566 +0000 UTC m=+0.146661006 container start c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 10:40:39 np0005592157 podman[368691]: 2026-01-22 15:40:39.738409372 +0000 UTC m=+0.150112842 container attach c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:40:39 np0005592157 stoic_blackwell[368708]: 167 167
Jan 22 10:40:39 np0005592157 systemd[1]: libpod-c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80.scope: Deactivated successfully.
Jan 22 10:40:39 np0005592157 podman[368691]: 2026-01-22 15:40:39.741689463 +0000 UTC m=+0.153392883 container died c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:40:39 np0005592157 systemd[1]: var-lib-containers-storage-overlay-82dae612c3ae737674f7cfbd384ef2a328f6e773bf2ed3f711b707b5e0549141-merged.mount: Deactivated successfully.
Jan 22 10:40:39 np0005592157 podman[368691]: 2026-01-22 15:40:39.785349325 +0000 UTC m=+0.197052745 container remove c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:40:39 np0005592157 systemd[1]: libpod-conmon-c8615669d7fd32b315c03bfb23ed1934fb357bdc09c32e55e85b18498647da80.scope: Deactivated successfully.
Jan 22 10:40:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:39 np0005592157 podman[368733]: 2026-01-22 15:40:39.990059819 +0000 UTC m=+0.048281828 container create e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:40:40 np0005592157 systemd[1]: Started libpod-conmon-e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee.scope.
Jan 22 10:40:40 np0005592157 podman[368733]: 2026-01-22 15:40:39.970045163 +0000 UTC m=+0.028267192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:40:40 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:40:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d911d039a4923df18aca0ccccde08a880016ef094de4ca15995b3964475dbe3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d911d039a4923df18aca0ccccde08a880016ef094de4ca15995b3964475dbe3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d911d039a4923df18aca0ccccde08a880016ef094de4ca15995b3964475dbe3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:40 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d911d039a4923df18aca0ccccde08a880016ef094de4ca15995b3964475dbe3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:40:40 np0005592157 podman[368733]: 2026-01-22 15:40:40.106531915 +0000 UTC m=+0.164753964 container init e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:40:40 np0005592157 podman[368733]: 2026-01-22 15:40:40.11884405 +0000 UTC m=+0.177066069 container start e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:40:40 np0005592157 podman[368733]: 2026-01-22 15:40:40.122443829 +0000 UTC m=+0.180665948 container attach e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:40:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:40 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:40 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:40.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:41 np0005592157 nervous_moser[368749]: {
Jan 22 10:40:41 np0005592157 nervous_moser[368749]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:40:41 np0005592157 nervous_moser[368749]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:40:41 np0005592157 nervous_moser[368749]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:40:41 np0005592157 nervous_moser[368749]:        "osd_id": 0,
Jan 22 10:40:41 np0005592157 nervous_moser[368749]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:40:41 np0005592157 nervous_moser[368749]:        "type": "bluestore"
Jan 22 10:40:41 np0005592157 nervous_moser[368749]:    }
Jan 22 10:40:41 np0005592157 nervous_moser[368749]: }
Jan 22 10:40:41 np0005592157 systemd[1]: libpod-e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee.scope: Deactivated successfully.
Jan 22 10:40:41 np0005592157 conmon[368749]: conmon e2b8ae58bb2acd2d2bc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee.scope/container/memory.events
Jan 22 10:40:41 np0005592157 podman[368770]: 2026-01-22 15:40:41.135526807 +0000 UTC m=+0.029196894 container died e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 22 10:40:41 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1d911d039a4923df18aca0ccccde08a880016ef094de4ca15995b3964475dbe3-merged.mount: Deactivated successfully.
Jan 22 10:40:41 np0005592157 podman[368770]: 2026-01-22 15:40:41.206109786 +0000 UTC m=+0.099779823 container remove e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 10:40:41 np0005592157 systemd[1]: libpod-conmon-e2b8ae58bb2acd2d2bc45850e9593ffffa6b8f8a8adcc810ba8a0fd3478563ee.scope: Deactivated successfully.
Jan 22 10:40:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:40:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:40:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:42 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:42.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 8668dd27-0cbb-4c9a-9cb4-5817c6d09674 does not exist
Jan 22 10:40:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 97f5f32c-dc4f-42e6-9824-b54c4d34ac1d does not exist
Jan 22 10:40:43 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 6601be10-e962-4fdc-aea6-5543d7f2ea51 does not exist
Jan 22 10:40:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:43 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:43 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:43 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:44.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:44 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:45 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:45 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:40:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:40:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 10:40:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:46.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 10:40:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:47.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:40:47.683 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:40:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:40:47.685 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:40:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:40:47.685 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:40:47
Jan 22 10:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'volumes', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root']
Jan 22 10:40:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:40:47 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:47 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:48 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:48.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:49.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:49 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:49 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:50.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:50 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:50 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:51.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:52 np0005592157 podman[368841]: 2026-01-22 15:40:52.348990662 +0000 UTC m=+0.084880975 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:40:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:52.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:53 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:53.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:54 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:54.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:55.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:55 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:55 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:55 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:56 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:56.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:57 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:40:58 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:58.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:40:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:40:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:40:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:00 np0005592157 podman[368914]: 2026-01-22 15:41:00.39958339 +0000 UTC m=+0.125699636 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:41:00 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:00 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:00.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:01 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:02.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:03 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:03 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:03.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:04 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:04 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:04.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:05 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:41:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:05.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:41:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:41:06 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:06.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:07 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:07.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:08 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:08.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:09.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:10 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:10.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:11 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:11 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:11 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:41:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:11.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:41:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:12 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:13 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:14 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:15.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:15 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:15 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:16 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:41:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:41:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:16.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:17 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:18 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:41:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774652085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:41:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:41:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774652085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:41:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:18.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:19 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:19 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:19 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:20 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:20 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:20.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:21.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:21 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:22 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:22.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:23 np0005592157 podman[369001]: 2026-01-22 15:41:23.304700891 +0000 UTC m=+0.049365235 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 10:41:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:23.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:23 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:24 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:24 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:24 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:24.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:25.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:25 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:25 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:26.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:27 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:27.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:28 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:28.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:29.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:29 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:29 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:29 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:41:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:41:30 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:30 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:31 np0005592157 podman[369025]: 2026-01-22 15:41:31.352736306 +0000 UTC m=+0.090595096 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:41:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:31 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:32.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:32 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:34 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:34 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:34 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:34.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:35 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:35 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:35.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:36 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:36 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:36.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:37 np0005592157 ceph-mon[74359]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:38 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:38.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:39 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:39 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 199 slow ops, oldest one blocked for 7487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:40 np0005592157 ceph-mon[74359]: Health check update: 199 slow ops, oldest one blocked for 7487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:40 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:40 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:40 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:40 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:40.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:41 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:42 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:42 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:42 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:42 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:42.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:43 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 7492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 7492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:44 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:44 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:44 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:44.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:46 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 10:41:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 10:41:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:41:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:41:46 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:46 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:41:46 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:46.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4956ca1a-5fde-4f63-8ea0-f7e2b61ae87f does not exist
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f9f15bc0-d872-4a8d-acdf-2e016c0ee4d3 does not exist
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bda7e9ae-e502-47d5-a323-637f961227e3 does not exist
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:41:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:41:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 10:41:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:47.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 10:41:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:41:47.684 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:41:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:41:47.684 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:41:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:41:47.684 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:41:47
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', '.mgr']
Jan 22 10:41:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:41:47 np0005592157 podman[369381]: 2026-01-22 15:41:47.87533083 +0000 UTC m=+0.053597609 container create 4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:41:47 np0005592157 systemd[1]: Started libpod-conmon-4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d.scope.
Jan 22 10:41:47 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:41:47 np0005592157 podman[369381]: 2026-01-22 15:41:47.846984908 +0000 UTC m=+0.025251667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:41:47 np0005592157 podman[369381]: 2026-01-22 15:41:47.953757854 +0000 UTC m=+0.132024603 container init 4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 22 10:41:47 np0005592157 podman[369381]: 2026-01-22 15:41:47.964584292 +0000 UTC m=+0.142851041 container start 4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 22 10:41:47 np0005592157 podman[369381]: 2026-01-22 15:41:47.968224323 +0000 UTC m=+0.146491062 container attach 4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:41:47 np0005592157 condescending_feistel[369399]: 167 167
Jan 22 10:41:47 np0005592157 systemd[1]: libpod-4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d.scope: Deactivated successfully.
Jan 22 10:41:47 np0005592157 podman[369381]: 2026-01-22 15:41:47.970797556 +0000 UTC m=+0.149064295 container died 4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feistel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:41:47 np0005592157 systemd[1]: var-lib-containers-storage-overlay-975ed17f44640ce6ae8fbdae470a1f54249dd64a6851e9326f40c7d2404a1439-merged.mount: Deactivated successfully.
Jan 22 10:41:48 np0005592157 podman[369381]: 2026-01-22 15:41:48.012866889 +0000 UTC m=+0.191133628 container remove 4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feistel, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:41:48 np0005592157 systemd[1]: libpod-conmon-4c0918d97b9632af45e6d27e1821698e28c2002da21d1c49197782f39879f15d.scope: Deactivated successfully.
Jan 22 10:41:48 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:41:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:48 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:41:48 np0005592157 podman[369423]: 2026-01-22 15:41:48.221835078 +0000 UTC m=+0.065312980 container create 9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:41:48 np0005592157 systemd[1]: Started libpod-conmon-9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4.scope.
Jan 22 10:41:48 np0005592157 podman[369423]: 2026-01-22 15:41:48.202487928 +0000 UTC m=+0.045965870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:41:48 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:41:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d044f325d26edc16e1cf9a6872b1087a8c123ccfa2ca509535fc182a9f306f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d044f325d26edc16e1cf9a6872b1087a8c123ccfa2ca509535fc182a9f306f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d044f325d26edc16e1cf9a6872b1087a8c123ccfa2ca509535fc182a9f306f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d044f325d26edc16e1cf9a6872b1087a8c123ccfa2ca509535fc182a9f306f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:48 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d044f325d26edc16e1cf9a6872b1087a8c123ccfa2ca509535fc182a9f306f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:48 np0005592157 podman[369423]: 2026-01-22 15:41:48.319090108 +0000 UTC m=+0.162568100 container init 9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:41:48 np0005592157 podman[369423]: 2026-01-22 15:41:48.336571281 +0000 UTC m=+0.180049193 container start 9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 10:41:48 np0005592157 podman[369423]: 2026-01-22 15:41:48.340907089 +0000 UTC m=+0.184385011 container attach 9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:41:48 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:48 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:48 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:48.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:49 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:49 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:49 np0005592157 hopeful_banach[369439]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:41:49 np0005592157 hopeful_banach[369439]: --> relative data size: 1.0
Jan 22 10:41:49 np0005592157 hopeful_banach[369439]: --> All data devices are unavailable
Jan 22 10:41:49 np0005592157 systemd[1]: libpod-9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4.scope: Deactivated successfully.
Jan 22 10:41:49 np0005592157 podman[369423]: 2026-01-22 15:41:49.23027236 +0000 UTC m=+1.073750262 container died 9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:41:49 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9d044f325d26edc16e1cf9a6872b1087a8c123ccfa2ca509535fc182a9f306f5-merged.mount: Deactivated successfully.
Jan 22 10:41:49 np0005592157 podman[369423]: 2026-01-22 15:41:49.291492387 +0000 UTC m=+1.134970309 container remove 9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:41:49 np0005592157 systemd[1]: libpod-conmon-9408a090abc24902524f45d1c55d334899ea7a2a8d165c9534508b089e81e1f4.scope: Deactivated successfully.
Jan 22 10:41:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:49 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 7497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:49 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:49 np0005592157 podman[369605]: 2026-01-22 15:41:49.927905668 +0000 UTC m=+0.048542004 container create 868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 10:41:49 np0005592157 systemd[1]: Started libpod-conmon-868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99.scope.
Jan 22 10:41:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:41:50 np0005592157 podman[369605]: 2026-01-22 15:41:49.908278962 +0000 UTC m=+0.028915398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:41:50 np0005592157 podman[369605]: 2026-01-22 15:41:50.019462967 +0000 UTC m=+0.140099333 container init 868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:41:50 np0005592157 podman[369605]: 2026-01-22 15:41:50.031371232 +0000 UTC m=+0.152007589 container start 868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:41:50 np0005592157 goofy_mendel[369621]: 167 167
Jan 22 10:41:50 np0005592157 podman[369605]: 2026-01-22 15:41:50.03530084 +0000 UTC m=+0.155937176 container attach 868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 22 10:41:50 np0005592157 systemd[1]: libpod-868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99.scope: Deactivated successfully.
Jan 22 10:41:50 np0005592157 podman[369605]: 2026-01-22 15:41:50.036463319 +0000 UTC m=+0.157099755 container died 868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:41:50 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c4f3caede2e5dadade2f350d90ed93c1e666d05caccbc639a0ca69037a7a3774-merged.mount: Deactivated successfully.
Jan 22 10:41:50 np0005592157 podman[369605]: 2026-01-22 15:41:50.08938238 +0000 UTC m=+0.210018736 container remove 868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:41:50 np0005592157 systemd[1]: libpod-conmon-868c13f3c51b3fb56edcd9c22c1dbf748acce69987636505dc516fc89bae9d99.scope: Deactivated successfully.
Jan 22 10:41:50 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 7497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:50 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:50 np0005592157 podman[369645]: 2026-01-22 15:41:50.326279701 +0000 UTC m=+0.079714526 container create 911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 10:41:50 np0005592157 podman[369645]: 2026-01-22 15:41:50.292543235 +0000 UTC m=+0.045978110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:41:50 np0005592157 systemd[1]: Started libpod-conmon-911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a.scope.
Jan 22 10:41:50 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:41:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca8a29828d8e58615b00e295a02a9c1b7958ebadb7de61d7137a033ac333dcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca8a29828d8e58615b00e295a02a9c1b7958ebadb7de61d7137a033ac333dcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca8a29828d8e58615b00e295a02a9c1b7958ebadb7de61d7137a033ac333dcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:50 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca8a29828d8e58615b00e295a02a9c1b7958ebadb7de61d7137a033ac333dcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:50 np0005592157 podman[369645]: 2026-01-22 15:41:50.463215235 +0000 UTC m=+0.216650110 container init 911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_heisenberg, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 10:41:50 np0005592157 podman[369645]: 2026-01-22 15:41:50.473603402 +0000 UTC m=+0.227038217 container start 911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:41:50 np0005592157 podman[369645]: 2026-01-22 15:41:50.478041222 +0000 UTC m=+0.231476087 container attach 911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_heisenberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 10:41:50 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:50 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:50 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:50.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:51 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]: {
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:    "0": [
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:        {
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "devices": [
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "/dev/loop3"
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            ],
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "lv_name": "ceph_lv0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "lv_size": "7511998464",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "name": "ceph_lv0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "tags": {
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.cluster_name": "ceph",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.crush_device_class": "",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.encrypted": "0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.osd_id": "0",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.type": "block",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:                "ceph.vdo": "0"
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            },
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "type": "block",
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:            "vg_name": "ceph_vg0"
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:        }
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]:    ]
Jan 22 10:41:51 np0005592157 compassionate_heisenberg[369661]: }
Jan 22 10:41:51 np0005592157 systemd[1]: libpod-911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a.scope: Deactivated successfully.
Jan 22 10:41:51 np0005592157 podman[369645]: 2026-01-22 15:41:51.236694824 +0000 UTC m=+0.990129609 container died 911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:41:51 np0005592157 systemd[1]: var-lib-containers-storage-overlay-eca8a29828d8e58615b00e295a02a9c1b7958ebadb7de61d7137a033ac333dcb-merged.mount: Deactivated successfully.
Jan 22 10:41:51 np0005592157 podman[369645]: 2026-01-22 15:41:51.294734223 +0000 UTC m=+1.048169018 container remove 911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 10:41:51 np0005592157 systemd[1]: libpod-conmon-911b3b0554acd1f8d5fd58ea63a45c2083941083954a5b80b3e443b9f9d9659a.scope: Deactivated successfully.
Jan 22 10:41:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:51.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:52 np0005592157 podman[369827]: 2026-01-22 15:41:52.011147028 +0000 UTC m=+0.048719898 container create 0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:41:52 np0005592157 systemd[1]: Started libpod-conmon-0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7.scope.
Jan 22 10:41:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:41:52 np0005592157 podman[369827]: 2026-01-22 15:41:51.989093181 +0000 UTC m=+0.026666061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:41:52 np0005592157 podman[369827]: 2026-01-22 15:41:52.102331408 +0000 UTC m=+0.139904268 container init 0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 22 10:41:52 np0005592157 podman[369827]: 2026-01-22 15:41:52.109609058 +0000 UTC m=+0.147181918 container start 0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:41:52 np0005592157 podman[369827]: 2026-01-22 15:41:52.115084564 +0000 UTC m=+0.152657434 container attach 0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:41:52 np0005592157 funny_mahavira[369843]: 167 167
Jan 22 10:41:52 np0005592157 systemd[1]: libpod-0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7.scope: Deactivated successfully.
Jan 22 10:41:52 np0005592157 podman[369827]: 2026-01-22 15:41:52.119585025 +0000 UTC m=+0.157157865 container died 0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:41:52 np0005592157 systemd[1]: var-lib-containers-storage-overlay-94809806079af5286707f94262a401af80c174baa0a641c4d56ec68c63a3f015-merged.mount: Deactivated successfully.
Jan 22 10:41:52 np0005592157 podman[369827]: 2026-01-22 15:41:52.165527934 +0000 UTC m=+0.203100794 container remove 0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:41:52 np0005592157 systemd[1]: libpod-conmon-0f40dc1ae8618768d995a91cc337519afe74247c999cb8494d964d856e3c2fa7.scope: Deactivated successfully.
Jan 22 10:41:52 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:52 np0005592157 podman[369867]: 2026-01-22 15:41:52.388973952 +0000 UTC m=+0.051471007 container create 764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 10:41:52 np0005592157 systemd[1]: Started libpod-conmon-764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9.scope.
Jan 22 10:41:52 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:41:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd5a10133c21b9cf2ec847758061a5c4b3063ea7a387c8771fc8d0169a64b51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd5a10133c21b9cf2ec847758061a5c4b3063ea7a387c8771fc8d0169a64b51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:52 np0005592157 podman[369867]: 2026-01-22 15:41:52.369310945 +0000 UTC m=+0.031808030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:41:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd5a10133c21b9cf2ec847758061a5c4b3063ea7a387c8771fc8d0169a64b51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:52 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd5a10133c21b9cf2ec847758061a5c4b3063ea7a387c8771fc8d0169a64b51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:41:52 np0005592157 podman[369867]: 2026-01-22 15:41:52.47160719 +0000 UTC m=+0.134104275 container init 764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:41:52 np0005592157 podman[369867]: 2026-01-22 15:41:52.485057733 +0000 UTC m=+0.147554788 container start 764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 10:41:52 np0005592157 podman[369867]: 2026-01-22 15:41:52.491026601 +0000 UTC m=+0.153523656 container attach 764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 10:41:52 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:52 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:52 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:52.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:53 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]: {
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]:        "osd_id": 0,
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]:        "type": "bluestore"
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]:    }
Jan 22 10:41:53 np0005592157 modest_khayyam[369883]: }
Jan 22 10:41:53 np0005592157 systemd[1]: libpod-764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9.scope: Deactivated successfully.
Jan 22 10:41:53 np0005592157 podman[369867]: 2026-01-22 15:41:53.340696868 +0000 UTC m=+1.003193983 container died 764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:41:53 np0005592157 systemd[1]: var-lib-containers-storage-overlay-9fd5a10133c21b9cf2ec847758061a5c4b3063ea7a387c8771fc8d0169a64b51-merged.mount: Deactivated successfully.
Jan 22 10:41:53 np0005592157 podman[369867]: 2026-01-22 15:41:53.40777274 +0000 UTC m=+1.070269785 container remove 764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:41:53 np0005592157 systemd[1]: libpod-conmon-764a172982e8e0cf748935748bb2684c69e9f819cddbe272b34ef16824de36e9.scope: Deactivated successfully.
Jan 22 10:41:53 np0005592157 podman[369905]: 2026-01-22 15:41:53.452279963 +0000 UTC m=+0.072213620 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 10:41:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:41:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:53 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:41:53 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:53 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 7edffd0e-e4ca-4110-87e2-a20b2d9260f4 does not exist
Jan 22 10:41:53 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d0dc3a70-2cf4-41a9-b185-91c435a55b32 does not exist
Jan 22 10:41:53 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f3127686-8e28-4905-bca6-729c5fa087ce does not exist
Jan 22 10:41:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:53.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:54 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:54 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:54 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 7502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:54 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:54 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:54 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:54.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:55 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 7502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:55 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:55.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:56 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:56 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:56 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:56 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:56.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:41:57 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.0 total, 600.0 interval#012Cumulative writes: 24K writes, 116K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 24K writes, 24K syncs, 1.00 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1841 writes, 9359 keys, 1839 commit groups, 1.0 writes per commit group, ingest: 11.31 MB, 0.02 MB/s#012Interval WAL: 1841 writes, 1839 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     51.7      2.57              0.56        81    0.032       0      0       0.0       0.0#012  L6      1/0    9.74 MB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   6.0    103.4     89.6      8.90              2.95        80    0.111    895K    49K       0.0       0.0#012 Sum      1/0    9.74 MB   0.0      0.9     0.1      0.8       0.9      0.1       0.0   7.0     80.3     81.1     11.47              3.51       161    0.071    895K    49K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     55.7     54.9      1.42              0.36        12    0.118     91K   5432       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   0.0    103.4     89.6      8.90              2.95        80    0.111    895K    49K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     51.8      2.56              0.56        80    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.0 total, 600.0 interval#012Flush(GB): cumulative 0.130, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.91 GB write, 0.12 MB/s write, 0.90 GB read, 0.12 MB/s read, 11.5 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5595cc6c11f0#2 capacity: 304.00 MB usage: 93.91 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000538 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4946,88.53 MB,29.1226%) FilterBlock(162,2.43 MB,0.8008%) IndexBlock(162,2.94 MB,0.968542%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:41:57 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:41:58 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:58 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:41:58 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:58.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:41:59 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:41:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:59 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 7507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:59 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:00 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:00 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 7507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:00 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:00 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:00 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:00.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:01.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:01 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:01 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:02 np0005592157 podman[370039]: 2026-01-22 15:42:02.469671243 +0000 UTC m=+0.189076497 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 22 10:42:02 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:02 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:02 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:02 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:02.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:04 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:04 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 7512 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:04 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:04 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:04 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:04 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:04.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:05 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:05 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 7512 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:42:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:42:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:42:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:05.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:42:06 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:06 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:06 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:06 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:06 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:06.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:07 np0005592157 ceph-mon[74359]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:07.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:08 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:08 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:08 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:08.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:09 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:09.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:09 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 7517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:10 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:10 np0005592157 ceph-mon[74359]: Health check update: 6 slow ops, oldest one blocked for 7517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:10 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:10 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:10 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:10.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:11.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:11 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:12 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:12 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:12 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:12.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:13 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:13.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:14 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:14 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:14 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 7522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:14 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:14 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:14 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:14 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:14.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:15 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 7522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:15 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:15.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:16 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:42:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:42:16 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:16 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:16 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:16.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:17.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:18 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:18 np0005592157 ceph-mon[74359]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:18 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:18 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:18 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:19.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 7527 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:20 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:20 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:20 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:20 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:20.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:21 np0005592157 ceph-mon[74359]: Health check update: 7 slow ops, oldest one blocked for 7527 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:21 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:22 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:22 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:22 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:22 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:22.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:23 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:23.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:24 np0005592157 podman[370128]: 2026-01-22 15:42:24.314110757 +0000 UTC m=+0.054317787 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:42:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:24 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:24 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:24 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:24.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 179 slow ops, oldest one blocked for 7532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:25 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:25 np0005592157 ceph-mon[74359]: Health check update: 179 slow ops, oldest one blocked for 7532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:25.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:26 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:26 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:26 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:26 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:26.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:27 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:28 np0005592157 ceph-mon[74359]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:28 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:28 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:28 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:28.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:29 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 179 slow ops, oldest one blocked for 7537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:30 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:30 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:42:30 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:30.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:42:31 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:31 np0005592157 ceph-mon[74359]: Health check update: 179 slow ops, oldest one blocked for 7537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:32 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:32 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:32 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:32 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:32.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:33 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:33 np0005592157 podman[370152]: 2026-01-22 15:42:33.389391662 +0000 UTC m=+0.110078680 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 10:42:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:34 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:34 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:34 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:34 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:34 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:34.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 158 slow ops, oldest one blocked for 7542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:35 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:35 np0005592157 ceph-mon[74359]: Health check update: 158 slow ops, oldest one blocked for 7542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:36 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:36 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:36 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:36 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:36.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:37 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:38 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:38 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:38 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:38 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:39 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:39.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 158 slow ops, oldest one blocked for 7547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:40 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:40 np0005592157 ceph-mon[74359]: Health check update: 158 slow ops, oldest one blocked for 7547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:40.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:41 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:41.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:42 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:42:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:43.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:42:43 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:43.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:44 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:45.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 158 slow ops, oldest one blocked for 7552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:45 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:45 np0005592157 ceph-mon[74359]: Health check update: 158 slow ops, oldest one blocked for 7552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:45.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:42:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:42:46 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:47.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:47.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:42:47
Jan 22 10:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'backups']
Jan 22 10:42:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:42:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:42:47.735 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:42:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:42:47.736 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:42:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:42:47.736 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:42:47 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:49 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:42:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:42:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 158 slow ops, oldest one blocked for 7557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:50 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:50 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:50 np0005592157 ceph-mon[74359]: Health check update: 158 slow ops, oldest one blocked for 7557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:51.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:51 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:51.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:52 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:53 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:42:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 9fde1fbe-5761-4188-8c41-82bc54271688 does not exist
Jan 22 10:42:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e4ce1a64-fbcb-4d41-8389-cee7f36da4d4 does not exist
Jan 22 10:42:54 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev e93c86fd-a790-4a59-946a-5ec2a7414899 does not exist
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:42:54 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:42:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:55.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 158 slow ops, oldest one blocked for 7562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:55 np0005592157 podman[370395]: 2026-01-22 15:42:55.204477536 +0000 UTC m=+0.117851682 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 10:42:55 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:42:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:42:55 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:42:55 np0005592157 ceph-mon[74359]: Health check update: 158 slow ops, oldest one blocked for 7562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:55 np0005592157 podman[370531]: 2026-01-22 15:42:55.774174945 +0000 UTC m=+0.069444982 container create 0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wilbur, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:42:55 np0005592157 systemd[1]: Started libpod-conmon-0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de.scope.
Jan 22 10:42:55 np0005592157 podman[370531]: 2026-01-22 15:42:55.751044202 +0000 UTC m=+0.046314269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:42:55 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:42:55 np0005592157 podman[370531]: 2026-01-22 15:42:55.869914148 +0000 UTC m=+0.165184205 container init 0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wilbur, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:42:55 np0005592157 podman[370531]: 2026-01-22 15:42:55.878148982 +0000 UTC m=+0.173419019 container start 0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 10:42:55 np0005592157 podman[370531]: 2026-01-22 15:42:55.882020428 +0000 UTC m=+0.177290465 container attach 0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 10:42:55 np0005592157 clever_wilbur[370549]: 167 167
Jan 22 10:42:55 np0005592157 systemd[1]: libpod-0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de.scope: Deactivated successfully.
Jan 22 10:42:55 np0005592157 podman[370531]: 2026-01-22 15:42:55.885540475 +0000 UTC m=+0.180810512 container died 0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wilbur, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 10:42:55 np0005592157 systemd[1]: var-lib-containers-storage-overlay-2e781caa22169aea7cee210d2d5eced960ec0249439abd9d190c893771f44154-merged.mount: Deactivated successfully.
Jan 22 10:42:55 np0005592157 podman[370531]: 2026-01-22 15:42:55.933124215 +0000 UTC m=+0.228394242 container remove 0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:42:55 np0005592157 systemd[1]: libpod-conmon-0af742d462f3f03bd4e14efeeb3d8566146e5fcf53976a51908788c8c62d32de.scope: Deactivated successfully.
Jan 22 10:42:56 np0005592157 podman[370572]: 2026-01-22 15:42:56.150623615 +0000 UTC m=+0.039605833 container create 63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:42:56 np0005592157 systemd[1]: Started libpod-conmon-63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b.scope.
Jan 22 10:42:56 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:42:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5a6a133cad10708a8e2f9d35fb7db39dfc8a674438fbc9307182f1cd7fc633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5a6a133cad10708a8e2f9d35fb7db39dfc8a674438fbc9307182f1cd7fc633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5a6a133cad10708a8e2f9d35fb7db39dfc8a674438fbc9307182f1cd7fc633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5a6a133cad10708a8e2f9d35fb7db39dfc8a674438fbc9307182f1cd7fc633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:56 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5a6a133cad10708a8e2f9d35fb7db39dfc8a674438fbc9307182f1cd7fc633/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:56 np0005592157 podman[370572]: 2026-01-22 15:42:56.212836097 +0000 UTC m=+0.101818335 container init 63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 10:42:56 np0005592157 podman[370572]: 2026-01-22 15:42:56.220787274 +0000 UTC m=+0.109769492 container start 63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:42:56 np0005592157 podman[370572]: 2026-01-22 15:42:56.224220479 +0000 UTC m=+0.113202717 container attach 63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 22 10:42:56 np0005592157 podman[370572]: 2026-01-22 15:42:56.133205283 +0000 UTC m=+0.022187521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:42:56 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:57.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:57 np0005592157 xenodochial_leakey[370588]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:42:57 np0005592157 xenodochial_leakey[370588]: --> relative data size: 1.0
Jan 22 10:42:57 np0005592157 xenodochial_leakey[370588]: --> All data devices are unavailable
Jan 22 10:42:57 np0005592157 systemd[1]: libpod-63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b.scope: Deactivated successfully.
Jan 22 10:42:57 np0005592157 podman[370572]: 2026-01-22 15:42:57.04578506 +0000 UTC m=+0.934767328 container died 63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:42:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-bd5a6a133cad10708a8e2f9d35fb7db39dfc8a674438fbc9307182f1cd7fc633-merged.mount: Deactivated successfully.
Jan 22 10:42:57 np0005592157 podman[370572]: 2026-01-22 15:42:57.121374754 +0000 UTC m=+1.010356972 container remove 63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:42:57 np0005592157 systemd[1]: libpod-conmon-63c54c2df5748a88a6a02edb501ed03a2470c5eceb6c29b4cdbadd348879d29b.scope: Deactivated successfully.
Jan 22 10:42:57 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:42:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:57.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:42:57 np0005592157 podman[370792]: 2026-01-22 15:42:57.804910653 +0000 UTC m=+0.050361569 container create f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_montalcini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:42:57 np0005592157 systemd[1]: Started libpod-conmon-f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb.scope.
Jan 22 10:42:57 np0005592157 podman[370792]: 2026-01-22 15:42:57.783614005 +0000 UTC m=+0.029064941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:42:57 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:42:57 np0005592157 podman[370792]: 2026-01-22 15:42:57.897023266 +0000 UTC m=+0.142474182 container init f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_montalcini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:42:57 np0005592157 podman[370792]: 2026-01-22 15:42:57.9044396 +0000 UTC m=+0.149890496 container start f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 10:42:57 np0005592157 focused_montalcini[370826]: 167 167
Jan 22 10:42:57 np0005592157 systemd[1]: libpod-f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb.scope: Deactivated successfully.
Jan 22 10:42:57 np0005592157 podman[370792]: 2026-01-22 15:42:57.908615813 +0000 UTC m=+0.154066729 container attach f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_montalcini, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Jan 22 10:42:57 np0005592157 podman[370792]: 2026-01-22 15:42:57.909556936 +0000 UTC m=+0.155007942 container died f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 10:42:57 np0005592157 systemd[1]: var-lib-containers-storage-overlay-28f52ed71f84d5e375725e743f338dd9f9fe1e5118e515cdef940717d1b2fe65-merged.mount: Deactivated successfully.
Jan 22 10:42:57 np0005592157 podman[370792]: 2026-01-22 15:42:57.953656799 +0000 UTC m=+0.199107705 container remove f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_montalcini, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:42:57 np0005592157 systemd[1]: libpod-conmon-f0a037ecf159fc4442faf75869ee98a03a5f9f4d88d314f49fc58a51116c0eeb.scope: Deactivated successfully.
Jan 22 10:42:58 np0005592157 podman[370849]: 2026-01-22 15:42:58.15460294 +0000 UTC m=+0.058531392 container create 4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 10:42:58 np0005592157 systemd[1]: Started libpod-conmon-4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763.scope.
Jan 22 10:42:58 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:42:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cbfe3444fc7da8e0eb10f5900250d73be6faa0b98f5a46b7d87de6bc928cd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cbfe3444fc7da8e0eb10f5900250d73be6faa0b98f5a46b7d87de6bc928cd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cbfe3444fc7da8e0eb10f5900250d73be6faa0b98f5a46b7d87de6bc928cd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:58 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cbfe3444fc7da8e0eb10f5900250d73be6faa0b98f5a46b7d87de6bc928cd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:42:58 np0005592157 podman[370849]: 2026-01-22 15:42:58.129310723 +0000 UTC m=+0.033239265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:42:58 np0005592157 podman[370849]: 2026-01-22 15:42:58.22602746 +0000 UTC m=+0.129955952 container init 4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:42:58 np0005592157 podman[370849]: 2026-01-22 15:42:58.237887964 +0000 UTC m=+0.141816416 container start 4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:42:58 np0005592157 podman[370849]: 2026-01-22 15:42:58.241863872 +0000 UTC m=+0.145792324 container attach 4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 22 10:42:58 np0005592157 ceph-mon[74359]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:42:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:59.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]: {
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:    "0": [
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:        {
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "devices": [
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "/dev/loop3"
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            ],
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "lv_name": "ceph_lv0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "lv_size": "7511998464",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "name": "ceph_lv0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "tags": {
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.cluster_name": "ceph",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.crush_device_class": "",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.encrypted": "0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.osd_id": "0",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.type": "block",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:                "ceph.vdo": "0"
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            },
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "type": "block",
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:            "vg_name": "ceph_vg0"
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:        }
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]:    ]
Jan 22 10:42:59 np0005592157 wizardly_dhawan[370866]: }
Jan 22 10:42:59 np0005592157 systemd[1]: libpod-4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763.scope: Deactivated successfully.
Jan 22 10:42:59 np0005592157 podman[370849]: 2026-01-22 15:42:59.064619383 +0000 UTC m=+0.968547855 container died 4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 10:42:59 np0005592157 systemd[1]: var-lib-containers-storage-overlay-30cbfe3444fc7da8e0eb10f5900250d73be6faa0b98f5a46b7d87de6bc928cd8-merged.mount: Deactivated successfully.
Jan 22 10:42:59 np0005592157 podman[370849]: 2026-01-22 15:42:59.13713301 +0000 UTC m=+1.041061452 container remove 4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 10:42:59 np0005592157 systemd[1]: libpod-conmon-4370c32b045aabf9bc7bc785d4c2845527d54099b733b4a225b4356b71970763.scope: Deactivated successfully.
Jan 22 10:42:59 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:42:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:42:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:42:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:42:59 np0005592157 podman[371027]: 2026-01-22 15:42:59.861045801 +0000 UTC m=+0.058132702 container create 234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:42:59 np0005592157 systemd[1]: Started libpod-conmon-234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f.scope.
Jan 22 10:42:59 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:42:59 np0005592157 podman[371027]: 2026-01-22 15:42:59.841865256 +0000 UTC m=+0.038952187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:42:59 np0005592157 podman[371027]: 2026-01-22 15:42:59.959192454 +0000 UTC m=+0.156279385 container init 234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:42:59 np0005592157 podman[371027]: 2026-01-22 15:42:59.968213407 +0000 UTC m=+0.165300298 container start 234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 10:42:59 np0005592157 podman[371027]: 2026-01-22 15:42:59.971861677 +0000 UTC m=+0.168948628 container attach 234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:42:59 np0005592157 romantic_gates[371044]: 167 167
Jan 22 10:42:59 np0005592157 systemd[1]: libpod-234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f.scope: Deactivated successfully.
Jan 22 10:42:59 np0005592157 podman[371027]: 2026-01-22 15:42:59.97599593 +0000 UTC m=+0.173082831 container died 234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:43:00 np0005592157 systemd[1]: var-lib-containers-storage-overlay-c397cc65922e0ab0d1a1ff0cef39f7d9d58c7700fae46c3dd960c0e6bd6db3b8-merged.mount: Deactivated successfully.
Jan 22 10:43:00 np0005592157 podman[371027]: 2026-01-22 15:43:00.018041112 +0000 UTC m=+0.215128003 container remove 234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 10:43:00 np0005592157 systemd[1]: libpod-conmon-234283b7d8049a1b895e5b14876d9e685cae88cccd1225d14b2ab9840d37358f.scope: Deactivated successfully.
Jan 22 10:43:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 158 slow ops, oldest one blocked for 7567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:00 np0005592157 podman[371068]: 2026-01-22 15:43:00.205383295 +0000 UTC m=+0.051527108 container create 84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:43:00 np0005592157 systemd[1]: Started libpod-conmon-84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef.scope.
Jan 22 10:43:00 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:43:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a8729a5048a17f75c98df58c52525d40ef229fac3cc28a442967daff8bbc4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:43:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a8729a5048a17f75c98df58c52525d40ef229fac3cc28a442967daff8bbc4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:43:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a8729a5048a17f75c98df58c52525d40ef229fac3cc28a442967daff8bbc4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:43:00 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a8729a5048a17f75c98df58c52525d40ef229fac3cc28a442967daff8bbc4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:43:00 np0005592157 podman[371068]: 2026-01-22 15:43:00.18379899 +0000 UTC m=+0.029942833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:43:00 np0005592157 podman[371068]: 2026-01-22 15:43:00.291802616 +0000 UTC m=+0.137946419 container init 84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 10:43:00 np0005592157 podman[371068]: 2026-01-22 15:43:00.300631655 +0000 UTC m=+0.146775458 container start 84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:43:00 np0005592157 podman[371068]: 2026-01-22 15:43:00.305049935 +0000 UTC m=+0.151193728 container attach 84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:43:00 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:00 np0005592157 ceph-mon[74359]: Health check update: 158 slow ops, oldest one blocked for 7567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]: {
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]:        "osd_id": 0,
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]:        "type": "bluestore"
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]:    }
Jan 22 10:43:01 np0005592157 wonderful_franklin[371083]: }
Jan 22 10:43:01 np0005592157 systemd[1]: libpod-84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef.scope: Deactivated successfully.
Jan 22 10:43:01 np0005592157 podman[371068]: 2026-01-22 15:43:01.261870428 +0000 UTC m=+1.108014251 container died 84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:43:01 np0005592157 systemd[1]: var-lib-containers-storage-overlay-57a8729a5048a17f75c98df58c52525d40ef229fac3cc28a442967daff8bbc4d-merged.mount: Deactivated successfully.
Jan 22 10:43:01 np0005592157 podman[371068]: 2026-01-22 15:43:01.33458728 +0000 UTC m=+1.180731083 container remove 84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 10:43:01 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:01 np0005592157 systemd[1]: libpod-conmon-84f759c6c65de46054a7e91c5148d18206880545442838d728ae2ff6c0563aef.scope: Deactivated successfully.
Jan 22 10:43:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:43:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:01 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:43:01 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:01 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 245e6256-6a36-4427-a8b2-9925c3d7beef does not exist
Jan 22 10:43:01 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3a5a8e41-8af8-4523-84e1-0c150aa43420 does not exist
Jan 22 10:43:01 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 0f0f261a-1704-42e9-9a5f-0046b8714d88 does not exist
Jan 22 10:43:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:02 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:02 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:03 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:04 np0005592157 podman[371173]: 2026-01-22 15:43:04.414577043 +0000 UTC m=+0.142186265 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:43:04 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:05 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:05 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:43:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:43:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:05.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:06 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:43:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:43:07 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:07.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:08 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:09 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:09.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:10 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:10 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:11.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:11 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:13 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:14 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:43:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:43:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:15 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:43:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:43:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:17 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:17.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:43:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3033953739' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:43:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:43:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3033953739' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:43:18 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:18 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:19.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:19 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:20 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:20 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:21.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:21 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:21.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #258. Immutable memtables: 0.
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.643825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 163] Flushing memtable with next log file: 258
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602643875, "job": 163, "event": "flush_started", "num_memtables": 1, "num_entries": 2514, "num_deletes": 540, "total_data_size": 3382315, "memory_usage": 3436544, "flush_reason": "Manual Compaction"}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 163] Level-0 flush table #259: started
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602667529, "cf_name": "default", "job": 163, "event": "table_file_creation", "file_number": 259, "file_size": 3292028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 115717, "largest_seqno": 118230, "table_properties": {"data_size": 3281741, "index_size": 5692, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32521, "raw_average_key_size": 23, "raw_value_size": 3257045, "raw_average_value_size": 2344, "num_data_blocks": 239, "num_entries": 1389, "num_filter_entries": 1389, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096432, "oldest_key_time": 1769096432, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 259, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 163] Flush lasted 23788 microseconds, and 8448 cpu microseconds.
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.667609) [db/flush_job.cc:967] [default] [JOB 163] Level-0 flush table #259: 3292028 bytes OK
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.667631) [db/memtable_list.cc:519] [default] Level-0 commit table #259 started
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669687) [db/memtable_list.cc:722] [default] Level-0 commit table #259: memtable #1 done
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669703) EVENT_LOG_v1 {"time_micros": 1769096602669698, "job": 163, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 163] Try to delete WAL files size 3370609, prev total WAL file size 3370609, number of live WAL files 2.
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000255.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.670619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 164] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 163 Base level 0, inputs: [259(3214KB)], [257(9975KB)]
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602670656, "job": 164, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [259], "files_L6": [257], "score": -1, "input_data_size": 13507414, "oldest_snapshot_seqno": -1}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 164] Generated table #260: 14618 keys, 11671283 bytes, temperature: kUnknown
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602770599, "cf_name": "default", "job": 164, "event": "table_file_creation", "file_number": 260, "file_size": 11671283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11592256, "index_size": 41346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36613, "raw_key_size": 400495, "raw_average_key_size": 27, "raw_value_size": 11344907, "raw_average_value_size": 776, "num_data_blocks": 1495, "num_entries": 14618, "num_filter_entries": 14618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 260, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.771127) [db/compaction/compaction_job.cc:1663] [default] [JOB 164] Compacted 1@0 + 1@6 files to L6 => 11671283 bytes
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.773618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.0 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.7 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 15715, records dropped: 1097 output_compression: NoCompression
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.773650) EVENT_LOG_v1 {"time_micros": 1769096602773636, "job": 164, "event": "compaction_finished", "compaction_time_micros": 100060, "compaction_time_cpu_micros": 37308, "output_level": 6, "num_output_files": 1, "total_output_size": 11671283, "num_input_records": 15715, "num_output_records": 14618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000259.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602775121, "job": 164, "event": "table_file_deletion", "file_number": 259}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000257.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602778527, "job": 164, "event": "table_file_deletion", "file_number": 257}
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.670540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.778629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.778638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.778642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.778646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:22.778650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:23.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:23 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:23.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:24 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:43:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:43:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:25 np0005592157 podman[371260]: 2026-01-22 15:43:25.302637203 +0000 UTC m=+0.046953855 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:43:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:25.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:25 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:25 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:26 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:27.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:27.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:27 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:29.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:29 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:43:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:29.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:43:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:30 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:30 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:30 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:31.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:31 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:31.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:32 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:33 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:33.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:34 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:43:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:43:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:35 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:35 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:35 np0005592157 podman[371283]: 2026-01-22 15:43:35.355667248 +0000 UTC m=+0.090832252 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:43:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:35.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:36 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:37 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:37.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:38 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:39 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.010000248s ======
Jan 22 10:43:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.010000248s
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #261. Immutable memtables: 0.
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.128391) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:856] [default] [JOB 165] Flushing memtable with next log file: 261
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620128975, "job": 165, "event": "flush_started", "num_memtables": 1, "num_entries": 477, "num_deletes": 287, "total_data_size": 302433, "memory_usage": 311464, "flush_reason": "Manual Compaction"}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:885] [default] [JOB 165] Level-0 flush table #262: started
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620134079, "cf_name": "default", "job": 165, "event": "table_file_creation", "file_number": 262, "file_size": 297704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 118231, "largest_seqno": 118707, "table_properties": {"data_size": 295140, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7222, "raw_average_key_size": 19, "raw_value_size": 289573, "raw_average_value_size": 774, "num_data_blocks": 23, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 287, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096603, "oldest_key_time": 1769096603, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 262, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 165] Flush lasted 5766 microseconds, and 2221 cpu microseconds.
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.134150) [db/flush_job.cc:967] [default] [JOB 165] Level-0 flush table #262: 297704 bytes OK
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.134172) [db/memtable_list.cc:519] [default] Level-0 commit table #262 started
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.135851) [db/memtable_list.cc:722] [default] Level-0 commit table #262: memtable #1 done
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.135872) EVENT_LOG_v1 {"time_micros": 1769096620135864, "job": 165, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.135894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 165] Try to delete WAL files size 299443, prev total WAL file size 299443, number of live WAL files 2.
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000258.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0036303432' seq:72057594037927935, type:22 .. '6C6F676D0036323937' seq:0, type:0; will stop at (end)
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 166] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 165 Base level 0, inputs: [262(290KB)], [260(11MB)]
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620136568, "job": 166, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [262], "files_L6": [260], "score": -1, "input_data_size": 11968987, "oldest_snapshot_seqno": -1}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 166] Generated table #263: 14409 keys, 11804784 bytes, temperature: kUnknown
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620239232, "cf_name": "default", "job": 166, "event": "table_file_creation", "file_number": 263, "file_size": 11804784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11726612, "index_size": 41067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 396996, "raw_average_key_size": 27, "raw_value_size": 11482420, "raw_average_value_size": 796, "num_data_blocks": 1479, "num_entries": 14409, "num_filter_entries": 14409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088714, "oldest_key_time": 0, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bb3b8fe-17e9-4c6f-9303-f02c31530e6c", "db_session_id": "0YQIT4DMC1LDOZT4JVHT", "orig_file_number": 263, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.239518) [db/compaction/compaction_job.cc:1663] [default] [JOB 166] Compacted 1@0 + 1@6 files to L6 => 11804784 bytes
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.241493) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.5 rd, 114.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(79.9) write-amplify(39.7) OK, records in: 14992, records dropped: 583 output_compression: NoCompression
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.241513) EVENT_LOG_v1 {"time_micros": 1769096620241504, "job": 166, "event": "compaction_finished", "compaction_time_micros": 102744, "compaction_time_cpu_micros": 54760, "output_level": 6, "num_output_files": 1, "total_output_size": 11804784, "num_input_records": 14992, "num_output_records": 14409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000262.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620241707, "job": 166, "event": "table_file_deletion", "file_number": 262}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000260.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620244308, "job": 166, "event": "table_file_deletion", "file_number": 260}
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.244419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.244428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.244432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.244435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: rocksdb: (Original Log Time 2026/01/22-15:43:40.244438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:40 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:41.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:42 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:43 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:43.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:44 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:44 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:45 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:45 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:46 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:43:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:43:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:43:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:43:47 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:43:47
Jan 22 10:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', 'default.rgw.control', 'images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 22 10:43:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:43:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:43:47.737 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:43:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:43:47.738 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:43:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:43:47.738 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:43:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:48 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:49.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:50 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:51 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:51 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:51.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:51.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:52 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:53 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:54 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:55.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:55 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:55 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:43:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:43:56 np0005592157 podman[371370]: 2026-01-22 15:43:56.312596398 +0000 UTC m=+0.052993344 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 10:43:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:56 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:57.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:57.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:57 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:43:58 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:43:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:59.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:59 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:00 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:00 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:01.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:01 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ed8a9e5d-9137-42a3-ac13-3d58f8eac256 does not exist
Jan 22 10:44:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4f635b78-4b1c-475f-80f3-f49f6a114f9c does not exist
Jan 22 10:44:02 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev a917dc99-7ead-4e82-adcd-f5887009ee09 does not exist
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:44:02 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:44:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:03 np0005592157 podman[371717]: 2026-01-22 15:44:03.491354779 +0000 UTC m=+0.033998613 container create b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:44:03 np0005592157 systemd[1]: Started libpod-conmon-b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2.scope.
Jan 22 10:44:03 np0005592157 podman[371717]: 2026-01-22 15:44:03.476199514 +0000 UTC m=+0.018843368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:44:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:44:03 np0005592157 podman[371717]: 2026-01-22 15:44:03.594864995 +0000 UTC m=+0.137508839 container init b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 10:44:03 np0005592157 podman[371717]: 2026-01-22 15:44:03.602805051 +0000 UTC m=+0.145448895 container start b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:44:03 np0005592157 podman[371717]: 2026-01-22 15:44:03.607444116 +0000 UTC m=+0.150087950 container attach b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:44:03 np0005592157 quirky_poincare[371733]: 167 167
Jan 22 10:44:03 np0005592157 systemd[1]: libpod-b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2.scope: Deactivated successfully.
Jan 22 10:44:03 np0005592157 conmon[371733]: conmon b05991b8b5ba7b37d266 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2.scope/container/memory.events
Jan 22 10:44:03 np0005592157 podman[371717]: 2026-01-22 15:44:03.612994404 +0000 UTC m=+0.155638278 container died b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:44:03 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d9b99c7f55fbc233c1a42a133c4e90df48e28abea2b6023261f8926763595787-merged.mount: Deactivated successfully.
Jan 22 10:44:03 np0005592157 podman[371717]: 2026-01-22 15:44:03.659986469 +0000 UTC m=+0.202630303 container remove b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:44:03 np0005592157 systemd[1]: libpod-conmon-b05991b8b5ba7b37d266bcdd84b967f3aa986d854ad6755fb47d484953093be2.scope: Deactivated successfully.
Jan 22 10:44:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:03 np0005592157 podman[371754]: 2026-01-22 15:44:03.866766803 +0000 UTC m=+0.057424144 container create 6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 10:44:03 np0005592157 systemd[1]: Started libpod-conmon-6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488.scope.
Jan 22 10:44:03 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:44:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c80ec26c402650d6011f033a2346a6481516be42854fe33efd222bb2ef390/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c80ec26c402650d6011f033a2346a6481516be42854fe33efd222bb2ef390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c80ec26c402650d6011f033a2346a6481516be42854fe33efd222bb2ef390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c80ec26c402650d6011f033a2346a6481516be42854fe33efd222bb2ef390/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:03 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c80ec26c402650d6011f033a2346a6481516be42854fe33efd222bb2ef390/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:03 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:44:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:03 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:44:03 np0005592157 podman[371754]: 2026-01-22 15:44:03.94368808 +0000 UTC m=+0.134345471 container init 6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:44:03 np0005592157 podman[371754]: 2026-01-22 15:44:03.850449589 +0000 UTC m=+0.041106950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:44:03 np0005592157 podman[371754]: 2026-01-22 15:44:03.951014141 +0000 UTC m=+0.141671482 container start 6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 22 10:44:03 np0005592157 podman[371754]: 2026-01-22 15:44:03.954404315 +0000 UTC m=+0.145061666 container attach 6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 10:44:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:04 np0005592157 happy_rosalind[371771]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:44:04 np0005592157 happy_rosalind[371771]: --> relative data size: 1.0
Jan 22 10:44:04 np0005592157 happy_rosalind[371771]: --> All data devices are unavailable
Jan 22 10:44:04 np0005592157 systemd[1]: libpod-6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488.scope: Deactivated successfully.
Jan 22 10:44:04 np0005592157 podman[371754]: 2026-01-22 15:44:04.763888407 +0000 UTC m=+0.954545788 container died 6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 10:44:04 np0005592157 systemd[1]: var-lib-containers-storage-overlay-257c80ec26c402650d6011f033a2346a6481516be42854fe33efd222bb2ef390-merged.mount: Deactivated successfully.
Jan 22 10:44:04 np0005592157 podman[371754]: 2026-01-22 15:44:04.826081369 +0000 UTC m=+1.016738710 container remove 6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 10:44:04 np0005592157 systemd[1]: libpod-conmon-6ce5f7278048e3f8f3796104210b22da8178e0a76cc4c88da3f6dba46513f488.scope: Deactivated successfully.
Jan 22 10:44:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:05 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:05 np0005592157 podman[371940]: 2026-01-22 15:44:05.451211801 +0000 UTC m=+0.039095300 container create 6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:44:05 np0005592157 systemd[1]: Started libpod-conmon-6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db.scope.
Jan 22 10:44:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:44:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:44:05 np0005592157 podman[371940]: 2026-01-22 15:44:05.527855331 +0000 UTC m=+0.115738860 container init 6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 10:44:05 np0005592157 podman[371940]: 2026-01-22 15:44:05.433557394 +0000 UTC m=+0.021440883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:44:05 np0005592157 podman[371940]: 2026-01-22 15:44:05.534225909 +0000 UTC m=+0.122109368 container start 6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:44:05 np0005592157 trusting_mayer[371958]: 167 167
Jan 22 10:44:05 np0005592157 systemd[1]: libpod-6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db.scope: Deactivated successfully.
Jan 22 10:44:05 np0005592157 podman[371940]: 2026-01-22 15:44:05.539622632 +0000 UTC m=+0.127506171 container attach 6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:44:05 np0005592157 podman[371940]: 2026-01-22 15:44:05.540145025 +0000 UTC m=+0.128028524 container died 6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:44:05 np0005592157 systemd[1]: var-lib-containers-storage-overlay-96491aea609b3b87851912c6d820378c047580d6567461cf87d46dc2a4ae57df-merged.mount: Deactivated successfully.
Jan 22 10:44:05 np0005592157 podman[371940]: 2026-01-22 15:44:05.588095884 +0000 UTC m=+0.175979353 container remove 6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 10:44:05 np0005592157 systemd[1]: libpod-conmon-6e2ecab76ce041dd8170735fb34501a4053adcbf67280371e680f556305f40db.scope: Deactivated successfully.
Jan 22 10:44:05 np0005592157 podman[371955]: 2026-01-22 15:44:05.626605488 +0000 UTC m=+0.128710991 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:44:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:05 np0005592157 podman[372007]: 2026-01-22 15:44:05.810235188 +0000 UTC m=+0.077814568 container create d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:44:05 np0005592157 systemd[1]: Started libpod-conmon-d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036.scope.
Jan 22 10:44:05 np0005592157 podman[372007]: 2026-01-22 15:44:05.77801347 +0000 UTC m=+0.045592910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:44:05 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:44:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a54e650af057e7047c9a6d29ab838166e7dfa5db3d256f6a46c32aac72ba4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a54e650af057e7047c9a6d29ab838166e7dfa5db3d256f6a46c32aac72ba4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a54e650af057e7047c9a6d29ab838166e7dfa5db3d256f6a46c32aac72ba4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:05 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a54e650af057e7047c9a6d29ab838166e7dfa5db3d256f6a46c32aac72ba4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:05 np0005592157 podman[372007]: 2026-01-22 15:44:05.911354024 +0000 UTC m=+0.178933454 container init d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 22 10:44:05 np0005592157 podman[372007]: 2026-01-22 15:44:05.926984792 +0000 UTC m=+0.194564172 container start d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:44:05 np0005592157 podman[372007]: 2026-01-22 15:44:05.931189756 +0000 UTC m=+0.198769126 container attach d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:44:06 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:06 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:06 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]: {
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:    "0": [
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:        {
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "devices": [
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "/dev/loop3"
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            ],
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "lv_name": "ceph_lv0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "lv_size": "7511998464",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "name": "ceph_lv0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "tags": {
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.cluster_name": "ceph",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.crush_device_class": "",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.encrypted": "0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.osd_id": "0",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.type": "block",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:                "ceph.vdo": "0"
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            },
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "type": "block",
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:            "vg_name": "ceph_vg0"
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:        }
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]:    ]
Jan 22 10:44:06 np0005592157 modest_northcutt[372024]: }
Jan 22 10:44:06 np0005592157 systemd[1]: libpod-d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036.scope: Deactivated successfully.
Jan 22 10:44:06 np0005592157 podman[372007]: 2026-01-22 15:44:06.723685627 +0000 UTC m=+0.991265007 container died d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 10:44:06 np0005592157 systemd[1]: var-lib-containers-storage-overlay-e0a54e650af057e7047c9a6d29ab838166e7dfa5db3d256f6a46c32aac72ba4a-merged.mount: Deactivated successfully.
Jan 22 10:44:06 np0005592157 podman[372007]: 2026-01-22 15:44:06.830300469 +0000 UTC m=+1.097879829 container remove d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 10:44:06 np0005592157 systemd[1]: libpod-conmon-d52736859f810d24398184e7da816179b919482e9abb6e05501eb6bfaa8e8036.scope: Deactivated successfully.
Jan 22 10:44:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:07 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:07 np0005592157 podman[372187]: 2026-01-22 15:44:07.514083575 +0000 UTC m=+0.042553165 container create ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:44:07 np0005592157 systemd[1]: Started libpod-conmon-ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172.scope.
Jan 22 10:44:07 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:44:07 np0005592157 podman[372187]: 2026-01-22 15:44:07.493025554 +0000 UTC m=+0.021495174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:44:07 np0005592157 podman[372187]: 2026-01-22 15:44:07.617757265 +0000 UTC m=+0.146226935 container init ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:44:07 np0005592157 podman[372187]: 2026-01-22 15:44:07.624067391 +0000 UTC m=+0.152536981 container start ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:44:07 np0005592157 lucid_jennings[372203]: 167 167
Jan 22 10:44:07 np0005592157 systemd[1]: libpod-ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172.scope: Deactivated successfully.
Jan 22 10:44:07 np0005592157 podman[372187]: 2026-01-22 15:44:07.690129768 +0000 UTC m=+0.218599378 container attach ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:44:07 np0005592157 podman[372187]: 2026-01-22 15:44:07.691585225 +0000 UTC m=+0.220054805 container died ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:44:07 np0005592157 systemd[1]: var-lib-containers-storage-overlay-336c2b9061236778d6291598d1a41c0710e80efa7e52824665629b7b795b503c-merged.mount: Deactivated successfully.
Jan 22 10:44:07 np0005592157 podman[372187]: 2026-01-22 15:44:07.73336088 +0000 UTC m=+0.261830450 container remove ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 10:44:07 np0005592157 systemd[1]: libpod-conmon-ad46ba48459068769898168a76f3dc7d09214c9e799318459c9b00b09635c172.scope: Deactivated successfully.
Jan 22 10:44:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:07 np0005592157 podman[372230]: 2026-01-22 15:44:07.953497786 +0000 UTC m=+0.053006985 container create fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 10:44:07 np0005592157 systemd[1]: Started libpod-conmon-fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d.scope.
Jan 22 10:44:08 np0005592157 podman[372230]: 2026-01-22 15:44:07.926923757 +0000 UTC m=+0.026432946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:44:08 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:44:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cc04759b244b4345d07c49969f6a1c3c87ed8b00ea2a4ef2f45f98b792a6f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cc04759b244b4345d07c49969f6a1c3c87ed8b00ea2a4ef2f45f98b792a6f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cc04759b244b4345d07c49969f6a1c3c87ed8b00ea2a4ef2f45f98b792a6f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:08 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cc04759b244b4345d07c49969f6a1c3c87ed8b00ea2a4ef2f45f98b792a6f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:44:08 np0005592157 podman[372230]: 2026-01-22 15:44:08.055102884 +0000 UTC m=+0.154612083 container init fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 22 10:44:08 np0005592157 podman[372230]: 2026-01-22 15:44:08.069636954 +0000 UTC m=+0.169146163 container start fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:44:08 np0005592157 podman[372230]: 2026-01-22 15:44:08.074454953 +0000 UTC m=+0.173964162 container attach fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 10:44:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:08 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]: {
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]:        "osd_id": 0,
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]:        "type": "bluestore"
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]:    }
Jan 22 10:44:08 np0005592157 elated_dubinsky[372248]: }
Jan 22 10:44:08 np0005592157 systemd[1]: libpod-fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d.scope: Deactivated successfully.
Jan 22 10:44:08 np0005592157 podman[372230]: 2026-01-22 15:44:08.924134361 +0000 UTC m=+1.023643540 container died fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:44:08 np0005592157 systemd[1]: var-lib-containers-storage-overlay-13cc04759b244b4345d07c49969f6a1c3c87ed8b00ea2a4ef2f45f98b792a6f6-merged.mount: Deactivated successfully.
Jan 22 10:44:08 np0005592157 podman[372230]: 2026-01-22 15:44:08.978807946 +0000 UTC m=+1.078317115 container remove fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:44:08 np0005592157 systemd[1]: libpod-conmon-fe269b4e2b05f751558f969a03dbc796def756ca56149fc17403ddbb45385d5d.scope: Deactivated successfully.
Jan 22 10:44:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:44:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:44:09 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:09.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev d0083ea3-2649-4165-8f01-92403d9c22b8 does not exist
Jan 22 10:44:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev bef042bd-1b88-43bf-a890-f5ff4c4fcd13 does not exist
Jan 22 10:44:09 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 3a6ef2b4-b882-4b66-95fd-4e1e103cd303 does not exist
Jan 22 10:44:09 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:10 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:10 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:11 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:11.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:12 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:13.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:13 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:13.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:14 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:15.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:15 np0005592157 ceph-mon[74359]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:15 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:44:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:44:16 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:17 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:44:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330623188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:44:18 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:44:18 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330623188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:44:18 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:20 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 207 slow ops, oldest one blocked for 7648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:21 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:21 np0005592157 ceph-mon[74359]: Health check update: 207 slow ops, oldest one blocked for 7648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:21.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:22 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:23.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:23 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:23 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:25 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 127 slow ops, oldest one blocked for 7653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:26 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:26 np0005592157 ceph-mon[74359]: Health check update: 127 slow ops, oldest one blocked for 7653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:27 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:27.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:27 np0005592157 podman[372392]: 2026-01-22 15:44:27.347213794 +0000 UTC m=+0.080111317 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 10:44:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:28 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:29.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:29 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:29.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 127 slow ops, oldest one blocked for 7658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:30 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:30 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:30 np0005592157 ceph-mon[74359]: Health check update: 127 slow ops, oldest one blocked for 7658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:31.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:44:31 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.1 total, 600.0 interval
Cumulative writes: 19K writes, 57K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
Cumulative WAL: 19K writes, 6589 syncs, 2.90 writes per sync, written: 0.03 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 760 writes, 1297 keys, 760 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
Interval WAL: 760 writes, 362 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:44:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:32 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:33 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:33.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:34 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:35 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:35.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 127 slow ops, oldest one blocked for 7662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:35.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:36 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:36 np0005592157 ceph-mon[74359]: Health check update: 127 slow ops, oldest one blocked for 7662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:36 np0005592157 podman[372419]: 2026-01-22 15:44:36.43427165 +0000 UTC m=+0.158746175 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 10:44:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:37.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:37 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:37.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:38 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:38 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:39.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:39 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:39.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 127 slow ops, oldest one blocked for 7667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:40 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:40 np0005592157 ceph-mon[74359]: Health check update: 127 slow ops, oldest one blocked for 7667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:41 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:41.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:42 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:43 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:44 np0005592157 ceph-mgr[74655]: [devicehealth INFO root] Check health
Jan 22 10:44:44 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:45.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 127 slow ops, oldest one blocked for 7672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:45 np0005592157 ceph-mon[74359]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:45 np0005592157 ceph-mon[74359]: Health check update: 127 slow ops, oldest one blocked for 7672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:44:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:44:47 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:44:47
Jan 22 10:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta']
Jan 22 10:44:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:44:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:44:47.738 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:44:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:44:47.738 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:44:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:44:47.738 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:44:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:48 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:49.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:49 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:49.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 127 slow ops, oldest one blocked for 7677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:50 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:50 np0005592157 ceph-mon[74359]: Health check update: 127 slow ops, oldest one blocked for 7677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:51.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:51 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:52 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:44:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:53.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:44:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:44:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:53.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:44:53 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:54 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:54 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:55 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 211 slow ops, oldest one blocked for 7682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:55 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:55 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:55 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:44:55 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:55.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:44:55 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:55 np0005592157 ceph-mon[74359]: Health check update: 211 slow ops, oldest one blocked for 7682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:56 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:56 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:57.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:57 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:57 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:57 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:57.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:57 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:58 np0005592157 podman[372506]: 2026-01-22 15:44:58.328040456 +0000 UTC m=+0.057624079 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:44:58 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:44:58 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:44:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:44:59 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:44:59 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:59 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:59.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:59 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:00 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 211 slow ops, oldest one blocked for 7687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:00 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:00 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:00 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:00 np0005592157 ceph-mon[74359]: Health check update: 211 slow ops, oldest one blocked for 7687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:45:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:01.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:45:01 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:01 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:01 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:02 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:02 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:03.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:03 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:03 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:03 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:03.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:04 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:04 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:05 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 211 slow ops, oldest one blocked for 7692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:05 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:05 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:05 np0005592157 ceph-mon[74359]: Health check update: 211 slow ops, oldest one blocked for 7692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014318554578447794 of space, bias 1.0, pg target 4.295566373534339 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003557282942490229 of space, bias 1.0, pg target 1.0529557509771077 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006939355341522427 of space, bias 1.0, pg target 0.2047109825749116 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 10:45:05 np0005592157 ceph-mgr[74655]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 22 10:45:05 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:05 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:05 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:06 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:06 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:07.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:07 np0005592157 podman[372579]: 2026-01-22 15:45:07.339049427 +0000 UTC m=+0.078013295 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:45:07 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:07 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:07 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:07 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:07.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:08 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:08 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:09.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:09 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:09 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:09 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:09 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 211 slow ops, oldest one blocked for 7697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 5085f9ac-b51e-45bd-aaa5-e3d399910ff0 does not exist
Jan 22 10:45:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev db1905bb-b3a7-4716-a663-a299ea7f16bb does not exist
Jan 22 10:45:10 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev f1302156-5046-48eb-b570-ae84d9cf7b49 does not exist
Jan 22 10:45:10 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: Health check update: 211 slow ops, oldest one blocked for 7697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:10 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:45:11 np0005592157 podman[372880]: 2026-01-22 15:45:11.013645155 +0000 UTC m=+0.037819499 container create 8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:45:11 np0005592157 systemd[1]: Started libpod-conmon-8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820.scope.
Jan 22 10:45:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:45:11 np0005592157 podman[372880]: 2026-01-22 15:45:10.996835408 +0000 UTC m=+0.021009762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:45:11 np0005592157 podman[372880]: 2026-01-22 15:45:11.093326669 +0000 UTC m=+0.117501023 container init 8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:45:11 np0005592157 podman[372880]: 2026-01-22 15:45:11.099563694 +0000 UTC m=+0.123738028 container start 8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 10:45:11 np0005592157 podman[372880]: 2026-01-22 15:45:11.102867536 +0000 UTC m=+0.127041910 container attach 8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:45:11 np0005592157 naughty_tharp[372896]: 167 167
Jan 22 10:45:11 np0005592157 systemd[1]: libpod-8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820.scope: Deactivated successfully.
Jan 22 10:45:11 np0005592157 podman[372880]: 2026-01-22 15:45:11.105985233 +0000 UTC m=+0.130159627 container died 8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:45:11 np0005592157 systemd[1]: var-lib-containers-storage-overlay-d067f43bf4065f1524b398ded34b0476dd4225f82c4744b268baf72665b554cc-merged.mount: Deactivated successfully.
Jan 22 10:45:11 np0005592157 podman[372880]: 2026-01-22 15:45:11.150536937 +0000 UTC m=+0.174711271 container remove 8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:45:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:11.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:11 np0005592157 systemd[1]: libpod-conmon-8012308d5a9cc2fe357466214309e19b040bd6195c4cadf6fea8f9996f676820.scope: Deactivated successfully.
Jan 22 10:45:11 np0005592157 podman[372921]: 2026-01-22 15:45:11.343085119 +0000 UTC m=+0.039307245 container create 2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:45:11 np0005592157 systemd[1]: Started libpod-conmon-2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239.scope.
Jan 22 10:45:11 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:45:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ce0225f32c644936d938a4209a22d5e77a99b043919bf5a2372cfedf0a7ff1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ce0225f32c644936d938a4209a22d5e77a99b043919bf5a2372cfedf0a7ff1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ce0225f32c644936d938a4209a22d5e77a99b043919bf5a2372cfedf0a7ff1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ce0225f32c644936d938a4209a22d5e77a99b043919bf5a2372cfedf0a7ff1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:11 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ce0225f32c644936d938a4209a22d5e77a99b043919bf5a2372cfedf0a7ff1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:11 np0005592157 podman[372921]: 2026-01-22 15:45:11.417876523 +0000 UTC m=+0.114098689 container init 2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:45:11 np0005592157 podman[372921]: 2026-01-22 15:45:11.324900259 +0000 UTC m=+0.021122425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:45:11 np0005592157 podman[372921]: 2026-01-22 15:45:11.429977843 +0000 UTC m=+0.126199979 container start 2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 10:45:11 np0005592157 podman[372921]: 2026-01-22 15:45:11.443470007 +0000 UTC m=+0.139692133 container attach 2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:45:11 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:11 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:11 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:11 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:12 np0005592157 competent_hofstadter[372938]: --> passed data devices: 0 physical, 1 LVM
Jan 22 10:45:12 np0005592157 competent_hofstadter[372938]: --> relative data size: 1.0
Jan 22 10:45:12 np0005592157 competent_hofstadter[372938]: --> All data devices are unavailable
Jan 22 10:45:12 np0005592157 systemd[1]: libpod-2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239.scope: Deactivated successfully.
Jan 22 10:45:12 np0005592157 podman[372921]: 2026-01-22 15:45:12.229295053 +0000 UTC m=+0.925517339 container died 2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 10:45:12 np0005592157 systemd[1]: var-lib-containers-storage-overlay-77ce0225f32c644936d938a4209a22d5e77a99b043919bf5a2372cfedf0a7ff1-merged.mount: Deactivated successfully.
Jan 22 10:45:12 np0005592157 podman[372921]: 2026-01-22 15:45:12.286367017 +0000 UTC m=+0.982589163 container remove 2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:45:12 np0005592157 systemd[1]: libpod-conmon-2395ebe56139ef582db11b9324c6b68050d3923226fcd47a7cdab978e08bf239.scope: Deactivated successfully.
Jan 22 10:45:12 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:12 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:12 np0005592157 podman[373106]: 2026-01-22 15:45:12.900893597 +0000 UTC m=+0.038553766 container create bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mccarthy, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:45:12 np0005592157 systemd[1]: Started libpod-conmon-bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7.scope.
Jan 22 10:45:12 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:45:12 np0005592157 podman[373106]: 2026-01-22 15:45:12.972209175 +0000 UTC m=+0.109869374 container init bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:45:12 np0005592157 podman[373106]: 2026-01-22 15:45:12.97808124 +0000 UTC m=+0.115741399 container start bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:45:12 np0005592157 podman[373106]: 2026-01-22 15:45:12.886512431 +0000 UTC m=+0.024172620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:45:12 np0005592157 podman[373106]: 2026-01-22 15:45:12.981500245 +0000 UTC m=+0.119160434 container attach bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:45:12 np0005592157 bold_mccarthy[373122]: 167 167
Jan 22 10:45:12 np0005592157 systemd[1]: libpod-bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7.scope: Deactivated successfully.
Jan 22 10:45:12 np0005592157 conmon[373122]: conmon bd583b4be4930db2d2d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7.scope/container/memory.events
Jan 22 10:45:12 np0005592157 podman[373106]: 2026-01-22 15:45:12.98454029 +0000 UTC m=+0.122200459 container died bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:45:13 np0005592157 systemd[1]: var-lib-containers-storage-overlay-1f545900d75ce6edf4234aac610f5ec4ca29112ec7a3aec97db9ce781ebe944b-merged.mount: Deactivated successfully.
Jan 22 10:45:13 np0005592157 podman[373106]: 2026-01-22 15:45:13.017274862 +0000 UTC m=+0.154935031 container remove bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:45:13 np0005592157 systemd[1]: libpod-conmon-bd583b4be4930db2d2d23faab9653f622a75b55c5831895b76fec3e8e53d34a7.scope: Deactivated successfully.
Jan 22 10:45:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:13.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:13 np0005592157 podman[373146]: 2026-01-22 15:45:13.170883818 +0000 UTC m=+0.040229068 container create c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:45:13 np0005592157 systemd[1]: Started libpod-conmon-c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe.scope.
Jan 22 10:45:13 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:45:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950202b3a1a796236cefa13d0c7683bd5272e488a68fdbdd9bf66549d276e334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950202b3a1a796236cefa13d0c7683bd5272e488a68fdbdd9bf66549d276e334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950202b3a1a796236cefa13d0c7683bd5272e488a68fdbdd9bf66549d276e334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:13 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950202b3a1a796236cefa13d0c7683bd5272e488a68fdbdd9bf66549d276e334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:13 np0005592157 podman[373146]: 2026-01-22 15:45:13.152210536 +0000 UTC m=+0.021555806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:45:13 np0005592157 podman[373146]: 2026-01-22 15:45:13.249971199 +0000 UTC m=+0.119316469 container init c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 10:45:13 np0005592157 podman[373146]: 2026-01-22 15:45:13.256516051 +0000 UTC m=+0.125861301 container start c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 10:45:13 np0005592157 podman[373146]: 2026-01-22 15:45:13.259766851 +0000 UTC m=+0.129112131 container attach c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:45:13 np0005592157 ceph-mon[74359]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:13 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:13 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:13 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:13.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]: {
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:    "0": [
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:        {
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            "devices": [
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:                "/dev/loop3"
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            ],
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            "lv_name": "ceph_lv0",
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            "lv_size": "7511998464",
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf8012c-a884-4617-89df-833bc5f19dbf,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            "lv_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:45:13 np0005592157 nifty_lehmann[373162]:            "name": "ceph_lv0",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:            "tags": {
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.block_uuid": "km3QT9-51HO-9zKg-Hx6x-Yb1v-m3Rh-GNuDtv",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.cephx_lockbox_secret": "",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.cluster_name": "ceph",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.crush_device_class": "",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.encrypted": "0",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.osd_fsid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.osd_id": "0",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.type": "block",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:                "ceph.vdo": "0"
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:            },
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:            "type": "block",
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:            "vg_name": "ceph_vg0"
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:        }
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]:    ]
Jan 22 10:45:14 np0005592157 nifty_lehmann[373162]: }
Jan 22 10:45:14 np0005592157 systemd[1]: libpod-c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe.scope: Deactivated successfully.
Jan 22 10:45:14 np0005592157 podman[373146]: 2026-01-22 15:45:14.032116892 +0000 UTC m=+0.901462162 container died c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 10:45:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-950202b3a1a796236cefa13d0c7683bd5272e488a68fdbdd9bf66549d276e334-merged.mount: Deactivated successfully.
Jan 22 10:45:14 np0005592157 podman[373146]: 2026-01-22 15:45:14.085641868 +0000 UTC m=+0.954987118 container remove c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:45:14 np0005592157 systemd[1]: libpod-conmon-c0e20f8b8f447d1cb623de3835bb3cae3df10cf342f418e20932b64f513565fe.scope: Deactivated successfully.
Jan 22 10:45:14 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:14 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:14 np0005592157 podman[373327]: 2026-01-22 15:45:14.738863597 +0000 UTC m=+0.052199694 container create fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:45:14 np0005592157 systemd[1]: Started libpod-conmon-fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9.scope.
Jan 22 10:45:14 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:45:14 np0005592157 podman[373327]: 2026-01-22 15:45:14.713753685 +0000 UTC m=+0.027089782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:45:14 np0005592157 podman[373327]: 2026-01-22 15:45:14.822117321 +0000 UTC m=+0.135453418 container init fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_johnson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:45:14 np0005592157 podman[373327]: 2026-01-22 15:45:14.831288848 +0000 UTC m=+0.144624935 container start fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:45:14 np0005592157 podman[373327]: 2026-01-22 15:45:14.835539123 +0000 UTC m=+0.148875220 container attach fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_johnson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 10:45:14 np0005592157 systemd[1]: libpod-fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9.scope: Deactivated successfully.
Jan 22 10:45:14 np0005592157 naughty_johnson[373344]: 167 167
Jan 22 10:45:14 np0005592157 conmon[373344]: conmon fa578dd57cb30dbc437e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9.scope/container/memory.events
Jan 22 10:45:14 np0005592157 podman[373327]: 2026-01-22 15:45:14.83861852 +0000 UTC m=+0.151954577 container died fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_johnson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:45:14 np0005592157 systemd[1]: var-lib-containers-storage-overlay-7e590d116085fcb446b756690ac6067309c8ad2eae6e2ad7563566ce9ba93fa8-merged.mount: Deactivated successfully.
Jan 22 10:45:14 np0005592157 podman[373327]: 2026-01-22 15:45:14.888779353 +0000 UTC m=+0.202115440 container remove fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_johnson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:45:14 np0005592157 systemd[1]: libpod-conmon-fa578dd57cb30dbc437e427cf28039340eb57583bcf3dc50ec0833385f8b26a9.scope: Deactivated successfully.
Jan 22 10:45:15 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 211 slow ops, oldest one blocked for 7702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:15 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:15 np0005592157 podman[373368]: 2026-01-22 15:45:15.149790892 +0000 UTC m=+0.084556787 container create c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:45:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:15.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:15 np0005592157 systemd[1]: Started libpod-conmon-c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b.scope.
Jan 22 10:45:15 np0005592157 podman[373368]: 2026-01-22 15:45:15.10694082 +0000 UTC m=+0.041706715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:45:15 np0005592157 systemd[1]: Started libcrun container.
Jan 22 10:45:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffd9e6ae18fe7d091455b81ab54cf5d58d2b665a8321b158bf4c4cd481cc874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffd9e6ae18fe7d091455b81ab54cf5d58d2b665a8321b158bf4c4cd481cc874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffd9e6ae18fe7d091455b81ab54cf5d58d2b665a8321b158bf4c4cd481cc874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:15 np0005592157 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffd9e6ae18fe7d091455b81ab54cf5d58d2b665a8321b158bf4c4cd481cc874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:45:15 np0005592157 podman[373368]: 2026-01-22 15:45:15.261776247 +0000 UTC m=+0.196542152 container init c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:45:15 np0005592157 podman[373368]: 2026-01-22 15:45:15.274771659 +0000 UTC m=+0.209537554 container start c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:45:15 np0005592157 podman[373368]: 2026-01-22 15:45:15.279593459 +0000 UTC m=+0.214359364 container attach c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 10:45:15 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:15 np0005592157 ceph-mon[74359]: Health check update: 211 slow ops, oldest one blocked for 7702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:15 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:15 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:15 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:15.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]: {
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]:    "dbf8012c-a884-4617-89df-833bc5f19dbf": {
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]:        "osd_id": 0,
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]:        "osd_uuid": "dbf8012c-a884-4617-89df-833bc5f19dbf",
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]:        "type": "bluestore"
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]:    }
Jan 22 10:45:16 np0005592157 charming_blackburn[373384]: }
Jan 22 10:45:16 np0005592157 systemd[1]: libpod-c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b.scope: Deactivated successfully.
Jan 22 10:45:16 np0005592157 podman[373368]: 2026-01-22 15:45:16.09350005 +0000 UTC m=+1.028265935 container died c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:45:16 np0005592157 systemd[1]: var-lib-containers-storage-overlay-fffd9e6ae18fe7d091455b81ab54cf5d58d2b665a8321b158bf4c4cd481cc874-merged.mount: Deactivated successfully.
Jan 22 10:45:16 np0005592157 podman[373368]: 2026-01-22 15:45:16.170844897 +0000 UTC m=+1.105610762 container remove c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 10:45:16 np0005592157 systemd[1]: libpod-conmon-c63035432c9435c9efa87e6ff79f9791ad166af0ad53b7720412c97cabbe287b.scope: Deactivated successfully.
Jan 22 10:45:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 10:45:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:16 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 10:45:16 np0005592157 ceph-mon[74359]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 4e154e20-5b73-4f7f-bcd0-c6147de36b62 does not exist
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev 79c2431a-cfab-4304-8a03-74f65cd4f49a does not exist
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [progress WARNING root] complete: ev ec26d757-d477-4ac2-8d5d-8ff7bc307cf0 does not exist
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:45:16 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:45:16 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:16 np0005592157 ceph-mon[74359]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:17.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:17 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:17 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:17 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:17.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:17 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:18 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:18 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:19.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:19 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:19 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:19 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:19.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:20 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 177 slow ops, oldest one blocked for 7708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:20 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:20 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:20 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:21.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:21 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592157 ceph-mon[74359]: Health check update: 177 slow ops, oldest one blocked for 7708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:21 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:21 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:45:21 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:21.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:45:22 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:22 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:23.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:23 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:23 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:23 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:23 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:23.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:24 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:24 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:25 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 177 slow ops, oldest one blocked for 7712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:25 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:25.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:25 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:25 np0005592157 ceph-mon[74359]: Health check update: 177 slow ops, oldest one blocked for 7712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:25 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:25 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:25 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:25.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:26 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:26 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:27.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:27 np0005592157 systemd-logind[785]: New session 52 of user zuul.
Jan 22 10:45:27 np0005592157 systemd[1]: Started Session 52 of User zuul.
Jan 22 10:45:27 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:27 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:27 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:27.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:28 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:28 np0005592157 podman[373562]: 2026-01-22 15:45:28.599515847 +0000 UTC m=+0.061880675 container health_status 48e016f561e03970036edbc5297b0e0e77dab14176f2b4da5eec6702d9b3e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 10:45:28 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:28 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:29 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:29 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:29 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:29 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:29.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:30 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 177 slow ops, oldest one blocked for 7718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:30 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:30 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27434 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:30 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:30 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18522 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:30 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:30 np0005592157 ceph-mon[74359]: Health check update: 177 slow ops, oldest one blocked for 7718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:30 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:31 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:31 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 22 10:45:31 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055205003' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 10:45:31 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:31 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:31 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:31 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:31.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:32 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28636 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:32 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:32 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28642 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:33 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 10:45:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:33.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 10:45:33 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:33 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:33 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:33.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:34 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:34 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:34 np0005592157 ovs-vsctl[373834]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 10:45:35 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 177 slow ops, oldest one blocked for 7723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:35 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:35.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:35 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592157 ceph-mon[74359]: Health check update: 177 slow ops, oldest one blocked for 7723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:35 np0005592157 virtqemud[245202]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 10:45:35 np0005592157 virtqemud[245202]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 10:45:35 np0005592157 virtqemud[245202]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 10:45:35 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:35 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:35 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:35.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:35 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27464 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:36 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:36 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27476 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:36 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 10:45:36 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 10:45:36 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:36 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: cache status {prefix=cache status} (starting...)
Jan 22 10:45:36 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:36 np0005592157 lvm[374139]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 10:45:36 np0005592157 lvm[374139]: VG ceph_vg0 finished
Jan 22 10:45:36 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: client ls {prefix=client ls} (starting...)
Jan 22 10:45:36 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18543 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:37 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27494 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:37 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:37 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:37.121+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:37.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:37 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 10:45:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475687693' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 10:45:37 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:45:37 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333500352' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:37 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:37 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:37.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:37 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27527 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 10:45:37 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934319290' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 10:45:38 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18579 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:38 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:38 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:38.183+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:38 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 10:45:38 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:38 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27533 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:38 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 10:45:38 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:38 np0005592157 podman[374406]: 2026-01-22 15:45:38.3756183 +0000 UTC m=+0.104064880 container health_status 8c39a8ec6608be9968a127fd90b5e616d843a568dcc2c222041cb11e30e06786 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8-2d6828621637a23875b5f9b411d42b1a9d0594482b5c64a53ba78438eaeb31b8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:45:38 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:38 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: ops {prefix=ops} (starting...)
Jan 22 10:45:38 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234990876' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347329438' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 10:45:38 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 10:45:38 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28666 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:38 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18606 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129939776' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 10:45:39 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: session ls {prefix=session ls} (starting...)
Jan 22 10:45:39 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst Can't run that command on an inactive MDS!
Jan 22 10:45:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:45:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:45:39 np0005592157 ceph-mds[91998]: mds.cephfs.compute-0.zjixst asok_command: status {prefix=status} (starting...)
Jan 22 10:45:39 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28672 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:39 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18618 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162337217' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 10:45:39 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27569 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:39 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 10:45:39 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:39.527+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747367764' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 10:45:39 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:39 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:39 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:39.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 10:45:39 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040615603' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 177 slow ops, oldest one blocked for 7728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628751414' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603627911' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 10:45:40 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:40 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18657 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:40 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 10:45:40 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:40.543+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4136528779' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 10:45:40 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28705 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:40 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:40 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:40.856+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 22 10:45:40 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938522680' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 10:45:41 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27611 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865448621' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 10:45:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: Health check update: 177 slow ops, oldest one blocked for 7728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261441280' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 10:45:41 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:41 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18693 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:41 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28747 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:41 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18708 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:41 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:41 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:41 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:41.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:41 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27647 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 10:45:41 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047798707' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28771 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18723 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 10:45:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2917217955' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:42 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 10:45:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27677 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27683 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2735252 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b13ab000/0x0/0x1bfc00000, data 0xb676f8f/0xa6b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b13ab000/0x0/0x1bfc00000, data 0xb676f8f/0xa6b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2735252 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b13ab000/0x0/0x1bfc00000, data 0xb676f8f/0xa6b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b13ab000/0x0/0x1bfc00000, data 0xb676f8f/0xa6b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 37650432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2735252 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 117.779151917s of 117.827247620s, submitted: 17
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 37634048 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede7f42c00 session 0x55ede9140960
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 37642240 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2736749 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9105000 session 0x55ede6673860
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9105400 session 0x55ede6af0d20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 37617664 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 37601280 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2736157 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede58cc800 session 0x55ede88b5680
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede54c0400 session 0x55ede911d2c0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139411456 unmapped: 35897344 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.318325043s of 18.404104233s, submitted: 22
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede7f42c00 session 0x55ede877d680
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2739413 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139411456 unmapped: 35897344 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139411456 unmapped: 35897344 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139411456 unmapped: 35897344 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9105000 session 0x55ede5d90b40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b13aa000/0x0/0x1bfc00000, data 0xb676ff1/0xa6b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9105400 session 0x55ede6af05a0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2758293 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1221000/0x0/0x1bfc00000, data 0xb800f8f/0xa83d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b119e000/0x0/0x1bfc00000, data 0xb883f8f/0xa8c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2758293 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b119e000/0x0/0x1bfc00000, data 0xb883f8f/0xa8c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b119e000/0x0/0x1bfc00000, data 0xb883f8f/0xa8c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 138985472 unmapped: 36323328 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.199616432s of 13.345398903s, submitted: 33
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea173000 session 0x55ede86b2960
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede54c0400 session 0x55ede5de01e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36306944 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 16K writes, 53K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 16K writes, 5472 syncs, 3.06 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 920 writes, 2022 keys, 920 commit groups, 1.0 writes per commit group, ingest: 0.68 MB, 0.00 MB/s
Interval WAL: 920 writes, 435 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ede4354f30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 36282368 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 36282368 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 36282368 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 36282368 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 36282368 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139034624 unmapped: 36274176 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139034624 unmapped: 36274176 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 36265984 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea224c00 session 0x55ede5d901e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea224400 session 0x55ede79dab40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea225800 session 0x55ede6a841e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede5d1c800 session 0x55ede5d91c20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 36257792 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139059200 unmapped: 36249600 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139059200 unmapped: 36249600 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139059200 unmapped: 36249600 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139075584 unmapped: 36233216 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9105400 session 0x55ede5d781e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 139075584 unmapped: 36233216 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea172c00 session 0x55ede9140000
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 141885440 unmapped: 33423360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 141885440 unmapped: 33423360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 155.975296021s of 156.052459717s, submitted: 23
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 33341440 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 141975552 unmapped: 33333248 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 141983744 unmapped: 33325056 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 32260096 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,3])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 32251904 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143179776 unmapped: 32129024 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143212544 unmapped: 32096256 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143212544 unmapped: 32096256 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143302656 unmapped: 32006144 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143335424 unmapped: 31973376 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.940376759s of 10.002292633s, submitted: 359
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x419f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143343616 unmapped: 31965184 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 31948800 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143368192 unmapped: 31940608 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143368192 unmapped: 31940608 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143368192 unmapped: 31940608 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143368192 unmapped: 31940608 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 31932416 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 31924224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 31916032 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea705800 session 0x55ede91414a0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 31907840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [1])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 31899648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 31891456 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea651800 session 0x55ede589b860
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede912fc00 session 0x55ede5c1e000
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea650800 session 0x55ede5eb0d20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edeac6a000 session 0x55ede78a5c20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede746f000 session 0x55ede840fa40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede74a4800 session 0x55ede5dd21e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede746f400 session 0x55ede59b63c0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9168c00 session 0x55ede91412c0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea651c00 session 0x55ede870c1e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede6990000 session 0x55ede6af1860
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9103000 session 0x55ede59b7c20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 31883264 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede7e19800 session 0x55ede5de0b40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 30965760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2670075 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 312.754547119s of 313.264434814s, submitted: 53
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 30965760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede7e19800 session 0x55ede884b4a0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede54c0400 session 0x55ede59b6f00
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 30965760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 30965760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ad000/0x0/0x1bfc00000, data 0xac64f8f/0x9ca1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 30965760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9caf000 session 0x55ede877d0e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9105400 session 0x55ede911c960
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144384000 unmapped: 30924800 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 30900224 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144433152 unmapped: 30875648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144474112 unmapped: 30834688 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 30818304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144498688 unmapped: 30810112 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 30801920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 30793728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede93bdc00 session 0x55ede6a85a40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede6970c00 session 0x55ede438f2c0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147374080 unmapped: 27934720 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ae000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147374080 unmapped: 27934720 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 102.814758301s of 107.173950195s, submitted: 113
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede54c0400 session 0x55ede91425a0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147374080 unmapped: 27934720 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147374080 unmapped: 27934720 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147374080 unmapped: 27934720 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede893a800 session 0x55ede7e990e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672636 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede7e19800 session 0x55ede5de1a40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b19ad000/0x0/0x1bfc00000, data 0xac64f7f/0x9ca0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 27648000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 17K writes, 54K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 17K writes, 5909 syncs, 2.99 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 975 writes, 1485 keys, 975 commit groups, 1.0 writes per commit group, ingest: 0.48 MB, 0.00 MB/s
Interval WAL: 975 writes, 437 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 27639808 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 27631616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 27893760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2704906 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 169.736587524s of 169.926528931s, submitted: 13
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 27885568 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148529152 unmapped: 26779648 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148611072 unmapped: 26697728 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148652032 unmapped: 26656768 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148652032 unmapped: 26656768 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148652032 unmapped: 26656768 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 26648576 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 26648576 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 26648576 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 26648576 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 26648576 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 26648576 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 26640384 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 26624000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 26624000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 26624000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 26624000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 26624000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 26624000 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148701184 unmapped: 26607616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148701184 unmapped: 26607616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148701184 unmapped: 26607616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148701184 unmapped: 26607616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148701184 unmapped: 26607616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148701184 unmapped: 26607616 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148709376 unmapped: 26599424 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 26591232 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 26591232 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 26591232 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 26591232 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 26591232 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 26591232 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 26591232 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 26583040 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede7f42c00 session 0x55ede91421e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea224c00 session 0x55ede884a000
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede9105000 session 0x55ede5d794a0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea225c00 session 0x55ede5d5a3c0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea225800 session 0x55ede78a54a0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 26574848 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea224000 session 0x55ede877d860
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea650400 session 0x55ede5cb4960
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55edea172000 session 0x55ede86b3860
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 26525696 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede959d400 session 0x55ede78a5c20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede93ba000 session 0x55ede5de14a0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148439040 unmapped: 26869760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2703850 data_alloc: 218103808 data_used: 27373568
Jan 22 10:45:42 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 10:45:42 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148250249' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148439040 unmapped: 26869760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede54c0400 session 0x55ede5d57c20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148439040 unmapped: 26869760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 ms_handle_reset con 0x55ede893bc00 session 0x55ede840ef00
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148439040 unmapped: 26869760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 437.681121826s of 438.700225830s, submitted: 312
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1665000/0x0/0x1bfc00000, data 0xafacf8f/0x9fe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 148439040 unmapped: 26869760 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55ede893bc00 session 0x55ede5eb1c20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145506304 unmapped: 29802496 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55ede893a800 session 0x55ede5d56780
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1661000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2764790 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145506304 unmapped: 29802496 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145506304 unmapped: 29802496 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145506304 unmapped: 29802496 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55edea651800 session 0x55ede9143c20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b0fcb000/0x0/0x1bfc00000, data 0xb645ad6/0xa683000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55ede912fc00 session 0x55ede911cb40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b0fcb000/0x0/0x1bfc00000, data 0xb645ad6/0xa683000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2764790 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b0fcb000/0x0/0x1bfc00000, data 0xb645ad6/0xa683000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b0fcb000/0x0/0x1bfc00000, data 0xb645ad6/0xa683000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2764790 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 7200.1 total, 600.0 interval
                                              Cumulative writes: 18K writes, 55K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                              Cumulative WAL: 18K writes, 6227 syncs, 2.95 writes per sync, written: 0.03 GB, 0.00 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 647 writes, 1071 keys, 647 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                              Interval WAL: 647 writes, 318 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b0fcb000/0x0/0x1bfc00000, data 0xb645ad6/0xa683000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145514496 unmapped: 29794304 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.255049706s of 14.324095726s, submitted: 17
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b0fcb000/0x0/0x1bfc00000, data 0xb645ad6/0xa683000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55ede9598400 session 0x55ede8ed2960
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 29777920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 29777920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 29777920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 29777920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 29777920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 29777920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 29777920 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: mgrc ms_handle_reset ms_handle_reset con 0x55edea651400
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1334415348
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1334415348,v1:192.168.122.100:6801/1334415348]
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: mgrc handle_mgr_configure stats_period=5
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55edea650c00 session 0x55ede88d92c0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55edeac6a000 session 0x55ede59b61e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 29638656 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 29630464 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55edea651000 session 0x55ede5d561e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55ede74a4800 session 0x55ede8f10b40
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55edea704800 session 0x55ede788ef00
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55ede9169c00 session 0x55ede6650d20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 29622272 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 143.473236084s of 143.505386353s, submitted: 7
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145719296 unmapped: 29589504 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145719296 unmapped: 29589504 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145719296 unmapped: 29589504 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 29573120 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2709158 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [1,0,0,0,0,0,1])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 29573120 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 29564928 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55edea651c00 session 0x55ede86b23c0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145825792 unmapped: 29483008 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145850368 unmapped: 29458432 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 ms_handle_reset con 0x55ede9103000 session 0x55ede911c000
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 29384704 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145932288 unmapped: 29376512 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145940480 unmapped: 29368320 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2708984 data_alloc: 218103808 data_used: 27381760
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145948672 unmapped: 29360128 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145948672 unmapped: 29360128 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b1662000/0x0/0x1bfc00000, data 0xafaead6/0x9fec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.081977844s of 61.919181824s, submitted: 356
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145956864 unmapped: 29351936 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145956864 unmapped: 29351936 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145956864 unmapped: 29351936 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 181 ms_handle_reset con 0x55ede93c0800 session 0x55ede8ed2000
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2716788 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145965056 unmapped: 29343744 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 181 heartbeat osd_stat(store_statfs(0x1b165c000/0x0/0x1bfc00000, data 0xafb077f/0x9ff1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 181 ms_handle_reset con 0x55ede912f400 session 0x55ede8697e00
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 181 ms_handle_reset con 0x55edea224800 session 0x55ede9140d20
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 181 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 182 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb2426/0x9ff3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 182 ms_handle_reset con 0x55ede93c1800 session 0x55ede5304780
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2717057 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 182 heartbeat osd_stat(store_statfs(0x1b165b000/0x0/0x1bfc00000, data 0xafb2416/0x9ff2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 182 heartbeat osd_stat(store_statfs(0x1b165b000/0x0/0x1bfc00000, data 0xafb2416/0x9ff2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.823017120s of 10.981437683s, submitted: 23
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b165b000/0x0/0x1bfc00000, data 0xafb2416/0x9ff2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 29335552 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 19K writes, 57K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 19K writes, 6589 syncs, 2.90 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 760 writes, 1297 keys, 760 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s#012Interval WAL: 760 writes, 362 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 ms_handle_reset con 0x55ede93bc800 session 0x55ede5d901e0
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145981440 unmapped: 29327360 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145989632 unmapped: 29319168 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145989632 unmapped: 29319168 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: bluestore.MempoolThread(0x55ede4433b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2720031 data_alloc: 218103808 data_used: 27389952
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 145989632 unmapped: 29319168 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 146178048 unmapped: 29130752 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'config diff' '{prefix=config diff}'
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'config show' '{prefix=config show}'
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 146276352 unmapped: 29032448 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: osd.0 183 heartbeat osd_stat(store_statfs(0x1b1658000/0x0/0x1bfc00000, data 0xafb3f72/0x9ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x45af9c7), peers [1,2] op hist [])
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 146178048 unmapped: 29130752 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: prioritycache tune_memory target: 4294967296 mapped: 146472960 unmapped: 28835840 heap: 175308800 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:42 np0005592157 ceph-osd[84809]: do_command 'log dump' '{prefix=log dump}'
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27695 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:43.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:43 np0005592157 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:45:43 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 10:45:43 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731501614' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18774 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28819 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 10:45:43 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:43.439+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 10:45:43 np0005592157 ceph-mon[74359]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27707 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18789 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:43 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27731 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:43 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:43 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:43 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:43.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:44 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28843 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:44 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18795 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:44 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27743 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:44 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 22 10:45:44 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/151730726' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 10:45:44 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28858 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:44 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:44 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:44 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18813 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:44 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28879 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27764 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:45 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:45.015+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:45 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28897 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 177 slow ops, oldest one blocked for 7733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074599857' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 10:45:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:45 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18837 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:45.322+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:45 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598294737' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28912 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308796185' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: Health check update: 177 slow ops, oldest one blocked for 7733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 22 10:45:45 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321101815' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 10:45:45 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28924 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:45 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:45 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:45 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:45.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449513707' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28936 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364877196' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837579571' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 10:45:46 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28951 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 22 10:45:46 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/892894112' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2477755921' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700747628' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 10:45:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:47.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:47 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28969 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260425190' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/536748687' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 10:45:47 np0005592157 systemd[1]: Starting Hostname Service...
Jan 22 10:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Optimize plan auto_2026-01-22_15:45:47
Jan 22 10:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 10:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] do_upmap
Jan 22 10:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log']
Jan 22 10:45:47 np0005592157 ceph-mgr[74655]: [balancer INFO root] prepared 0/10 changes
Jan 22 10:45:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:45:47.739 157426 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:45:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:45:47.740 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:45:47 np0005592157 ovn_metadata_agent[157421]: 2026-01-22 15:45:47.740 157426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:45:47 np0005592157 systemd[1]: Started Hostname Service.
Jan 22 10:45:47 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27863 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:47 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:47 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:47 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:47.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452673441' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 22 10:45:47 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543650651' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28999 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-0-nyayzk[74651]: 2026-01-22T15:45:48.086+0000 7fdf474f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:48 np0005592157 ceph-mgr[74655]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 10:45:48 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:48 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27890 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 22 10:45:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410981744' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 22 10:45:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772120358' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:48 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18954 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27920 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18948 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:48 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 22 10:45:48 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2500690000' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18969 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27935 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18975 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.18984 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27950 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.19002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:49 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:45:49 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925810316' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(cluster) log [WRN] : Health check update: 212 slow ops, oldest one blocked for 7738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:50 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.19026 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.27989 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909934921' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 10:45:50 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.19044 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28001 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 22 10:45:50 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058460210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.19071 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28025 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:51.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29134 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: Health check update: 212 slow ops, oldest one blocked for 7738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29140 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695759947' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 10:45:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29146 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 10:45:51 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 10:45:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29152 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:51 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:51 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:51 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:51 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29161 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:52 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.28082 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:52 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:52 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:52 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29191 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:52 np0005592157 ceph-mgr[74655]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 10:45:52 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29209 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:52 np0005592157 ceph-mon[74359]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 22 10:45:52 np0005592157 ceph-mon[74359]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665761610' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 10:45:53 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.29224 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 10:45:53 np0005592157 ceph-mgr[74655]: log_channel(audit) log [DBG] : from='client.19152 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 10:45:53 np0005592157 radosgw[91596]: ====== starting new request req=0x7ff7fd2d96f0 =====
Jan 22 10:45:53 np0005592157 radosgw[91596]: ====== req done req=0x7ff7fd2d96f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:45:53 np0005592157 radosgw[91596]: beast: 0x7ff7fd2d96f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:53.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:45:53 np0005592157 ceph-mon[74359]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
